00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 975 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3642 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.066 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.067 The recommended git tool is: git 00:00:00.067 using credential 00000000-0000-0000-0000-000000000002 00:00:00.070 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.099 Fetching changes from the remote Git repository 00:00:00.102 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.160 Using shallow fetch with depth 1 00:00:00.160 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.160 > git --version # timeout=10 00:00:00.199 > git --version # 'git version 2.39.2' 00:00:00.199 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.233 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.233 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.249 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.260 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.271 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.271 > git config core.sparsecheckout # timeout=10 00:00:06.281 > git read-tree -mu HEAD # timeout=10 00:00:06.297 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.314 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.314 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.440 [Pipeline] Start of Pipeline 00:00:06.454 [Pipeline] library 00:00:06.456 Loading library shm_lib@master 00:00:06.456 Library shm_lib@master is cached. Copying from home. 00:00:06.506 [Pipeline] node 00:00:06.516 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:06.518 [Pipeline] { 00:00:06.528 [Pipeline] catchError 00:00:06.529 [Pipeline] { 00:00:06.541 [Pipeline] wrap 00:00:06.549 [Pipeline] { 00:00:06.557 [Pipeline] stage 00:00:06.559 [Pipeline] { (Prologue) 00:00:06.576 [Pipeline] echo 00:00:06.577 Node: VM-host-SM0 00:00:06.583 [Pipeline] cleanWs 00:00:06.597 [WS-CLEANUP] Deleting project workspace... 00:00:06.597 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.603 [WS-CLEANUP] done 00:00:06.762 [Pipeline] setCustomBuildProperty 00:00:06.855 [Pipeline] httpRequest 00:00:07.215 [Pipeline] echo 00:00:07.216 Sorcerer 10.211.164.20 is alive 00:00:07.224 [Pipeline] retry 00:00:07.225 [Pipeline] { 00:00:07.238 [Pipeline] httpRequest 00:00:07.241 HttpMethod: GET 00:00:07.241 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.242 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.259 Response Code: HTTP/1.1 200 OK 00:00:07.260 Success: Status code 200 is in the accepted range: 200,404 00:00:07.260 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:27.988 [Pipeline] } 00:00:28.006 [Pipeline] // retry 00:00:28.014 [Pipeline] sh 00:00:28.300 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:28.315 [Pipeline] httpRequest 00:00:28.702 [Pipeline] echo 00:00:28.704 Sorcerer 10.211.164.20 is alive 00:00:28.713 [Pipeline] retry 00:00:28.715 [Pipeline] { 00:00:28.730 [Pipeline] httpRequest 00:00:28.735 HttpMethod: GET 00:00:28.736 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:28.736 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:28.745 Response Code: HTTP/1.1 200 OK 00:00:28.746 Success: Status code 200 is in the accepted range: 200,404 00:00:28.747 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:53.999 [Pipeline] } 00:00:54.016 [Pipeline] // retry 00:00:54.023 [Pipeline] sh 00:00:54.307 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:56.853 [Pipeline] sh 00:00:57.138 + git -C spdk log --oneline -n5 00:00:57.138 c13c99a5e test: Various fixes for Fedora40 00:00:57.138 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:57.138 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:57.138 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:57.138 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:57.159 [Pipeline] withCredentials 00:00:57.171 > git --version # timeout=10 00:00:57.184 > git --version # 'git version 2.39.2' 00:00:57.201 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:57.203 [Pipeline] { 00:00:57.213 [Pipeline] retry 00:00:57.216 [Pipeline] { 00:00:57.233 [Pipeline] sh 00:00:57.517 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:57.788 [Pipeline] } 00:00:57.810 [Pipeline] // retry 00:00:57.816 [Pipeline] } 00:00:57.836 [Pipeline] // withCredentials 00:00:57.847 [Pipeline] httpRequest 00:00:58.363 [Pipeline] echo 00:00:58.365 Sorcerer 10.211.164.20 is alive 00:00:58.376 [Pipeline] retry 00:00:58.378 [Pipeline] { 00:00:58.395 [Pipeline] httpRequest 00:00:58.400 HttpMethod: GET 00:00:58.401 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:58.401 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:58.402 Response Code: HTTP/1.1 200 OK 00:00:58.403 Success: Status code 200 is in the accepted range: 200,404 00:00:58.404 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:08.542 [Pipeline] } 00:01:08.553 [Pipeline] // retry 
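The jbp, spdk and dpdk source archives above are all pulled from the internal package cache (Sorcerer, 10.211.164.20) and unpacked into the workspace before the build starts. A minimal sketch of that fetch-and-extract step, assuming curl is available; the URL is copied from the log, while the retry count and sleep interval are illustrative and the package host is only reachable from the CI network:

    #!/bin/bash
    # Fetch a pre-packaged source tarball from the package cache and unpack it.
    # URL taken from the log above; retry/backoff values are illustrative only.
    pkg_url="http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz"
    out="$(basename "$pkg_url")"
    for attempt in 1 2 3; do
        curl -fsSL -o "$out" "$pkg_url" && break
        echo "download failed (attempt $attempt), retrying" >&2
        sleep 5
    done
    # --no-same-owner matches the pipeline's tar invocation, so extracted files
    # are owned by the current user rather than the archive's original owner.
    tar --no-same-owner -xf "$out"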
00:01:08.558 [Pipeline] sh 00:01:08.834 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:10.224 [Pipeline] sh 00:01:10.505 + git -C dpdk log --oneline -n5 00:01:10.505 eeb0605f11 version: 23.11.0 00:01:10.505 238778122a doc: update release notes for 23.11 00:01:10.505 46aa6b3cfc doc: fix description of RSS features 00:01:10.505 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:10.505 7e421ae345 devtools: support skipping forbid rule check 00:01:10.522 [Pipeline] writeFile 00:01:10.537 [Pipeline] sh 00:01:10.820 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:10.832 [Pipeline] sh 00:01:11.114 + cat autorun-spdk.conf 00:01:11.114 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.114 SPDK_TEST_NVMF=1 00:01:11.114 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:11.114 SPDK_TEST_USDT=1 00:01:11.114 SPDK_RUN_UBSAN=1 00:01:11.114 SPDK_TEST_NVMF_MDNS=1 00:01:11.114 NET_TYPE=virt 00:01:11.114 SPDK_JSONRPC_GO_CLIENT=1 00:01:11.114 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:11.114 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:11.114 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:11.122 RUN_NIGHTLY=1 00:01:11.124 [Pipeline] } 00:01:11.139 [Pipeline] // stage 00:01:11.154 [Pipeline] stage 00:01:11.156 [Pipeline] { (Run VM) 00:01:11.168 [Pipeline] sh 00:01:11.450 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:11.450 + echo 'Start stage prepare_nvme.sh' 00:01:11.450 Start stage prepare_nvme.sh 00:01:11.450 + [[ -n 0 ]] 00:01:11.450 + disk_prefix=ex0 00:01:11.450 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:11.450 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:11.450 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:11.450 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.450 ++ SPDK_TEST_NVMF=1 00:01:11.450 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:11.450 ++ SPDK_TEST_USDT=1 00:01:11.450 ++ SPDK_RUN_UBSAN=1 00:01:11.450 ++ SPDK_TEST_NVMF_MDNS=1 00:01:11.450 ++ NET_TYPE=virt 00:01:11.450 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:11.450 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:11.450 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:11.450 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:11.450 ++ RUN_NIGHTLY=1 00:01:11.450 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:11.450 + nvme_files=() 00:01:11.450 + declare -A nvme_files 00:01:11.450 + backend_dir=/var/lib/libvirt/images/backends 00:01:11.450 + nvme_files['nvme.img']=5G 00:01:11.450 + nvme_files['nvme-cmb.img']=5G 00:01:11.450 + nvme_files['nvme-multi0.img']=4G 00:01:11.450 + nvme_files['nvme-multi1.img']=4G 00:01:11.450 + nvme_files['nvme-multi2.img']=4G 00:01:11.450 + nvme_files['nvme-openstack.img']=8G 00:01:11.450 + nvme_files['nvme-zns.img']=5G 00:01:11.450 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:11.450 + (( SPDK_TEST_FTL == 1 )) 00:01:11.450 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:11.450 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:11.450 + for nvme in "${!nvme_files[@]}" 00:01:11.450 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:11.450 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:11.450 + for nvme in "${!nvme_files[@]}" 00:01:11.450 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:11.450 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:11.450 + for nvme in "${!nvme_files[@]}" 00:01:11.450 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:11.450 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:11.450 + for nvme in "${!nvme_files[@]}" 00:01:11.450 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:11.450 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:11.450 + for nvme in "${!nvme_files[@]}" 00:01:11.450 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:11.450 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:11.450 + for nvme in "${!nvme_files[@]}" 00:01:11.450 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:11.709 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:11.709 + for nvme in "${!nvme_files[@]}" 00:01:11.709 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:11.709 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:11.709 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:11.709 + echo 'End stage prepare_nvme.sh' 00:01:11.709 End stage prepare_nvme.sh 00:01:11.721 [Pipeline] sh 00:01:12.002 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:12.002 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:01:12.002 00:01:12.002 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:12.002 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:12.002 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:12.002 HELP=0 00:01:12.002 DRY_RUN=0 00:01:12.002 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:01:12.002 NVME_DISKS_TYPE=nvme,nvme, 00:01:12.002 NVME_AUTO_CREATE=0 00:01:12.002 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:01:12.002 NVME_CMB=,, 00:01:12.002 NVME_PMR=,, 00:01:12.002 NVME_ZNS=,, 00:01:12.002 NVME_MS=,, 00:01:12.002 NVME_FDP=,, 00:01:12.002 
SPDK_VAGRANT_DISTRO=fedora39 00:01:12.002 SPDK_VAGRANT_VMCPU=10 00:01:12.002 SPDK_VAGRANT_VMRAM=12288 00:01:12.002 SPDK_VAGRANT_PROVIDER=libvirt 00:01:12.002 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:12.002 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:12.002 SPDK_OPENSTACK_NETWORK=0 00:01:12.002 VAGRANT_PACKAGE_BOX=0 00:01:12.003 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:12.003 FORCE_DISTRO=true 00:01:12.003 VAGRANT_BOX_VERSION= 00:01:12.003 EXTRA_VAGRANTFILES= 00:01:12.003 NIC_MODEL=e1000 00:01:12.003 00:01:12.003 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:01:12.003 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:14.537 Bringing machine 'default' up with 'libvirt' provider... 00:01:15.474 ==> default: Creating image (snapshot of base box volume). 00:01:15.474 ==> default: Creating domain with the following settings... 00:01:15.474 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731941258_9cbe7b9a096c84e35db1 00:01:15.474 ==> default: -- Domain type: kvm 00:01:15.474 ==> default: -- Cpus: 10 00:01:15.474 ==> default: -- Feature: acpi 00:01:15.474 ==> default: -- Feature: apic 00:01:15.474 ==> default: -- Feature: pae 00:01:15.474 ==> default: -- Memory: 12288M 00:01:15.474 ==> default: -- Memory Backing: hugepages: 00:01:15.474 ==> default: -- Management MAC: 00:01:15.474 ==> default: -- Loader: 00:01:15.474 ==> default: -- Nvram: 00:01:15.474 ==> default: -- Base box: spdk/fedora39 00:01:15.474 ==> default: -- Storage pool: default 00:01:15.474 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731941258_9cbe7b9a096c84e35db1.img (20G) 00:01:15.474 ==> default: -- Volume Cache: default 00:01:15.474 ==> default: -- Kernel: 00:01:15.474 ==> default: -- Initrd: 00:01:15.474 ==> default: -- Graphics Type: vnc 00:01:15.474 ==> default: -- Graphics Port: -1 00:01:15.474 ==> default: -- Graphics IP: 127.0.0.1 00:01:15.474 ==> default: -- Graphics Password: Not defined 00:01:15.474 ==> default: -- Video Type: cirrus 00:01:15.474 ==> default: -- Video VRAM: 9216 00:01:15.474 ==> default: -- Sound Type: 00:01:15.474 ==> default: -- Keymap: en-us 00:01:15.474 ==> default: -- TPM Path: 00:01:15.474 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:15.474 ==> default: -- Command line args: 00:01:15.474 ==> default: -> value=-device, 00:01:15.474 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:15.474 ==> default: -> value=-drive, 00:01:15.474 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:01:15.474 ==> default: -> value=-device, 00:01:15.474 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.474 ==> default: -> value=-device, 00:01:15.474 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:15.474 ==> default: -> value=-drive, 00:01:15.474 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:15.474 ==> default: -> value=-device, 00:01:15.474 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.474 ==> default: -> value=-drive, 00:01:15.474 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:15.474 ==> default: -> value=-device, 00:01:15.474 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.474 ==> default: -> value=-drive, 00:01:15.474 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:15.474 ==> default: -> value=-device, 00:01:15.474 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.733 ==> default: Creating shared folders metadata... 00:01:15.733 ==> default: Starting domain. 00:01:17.636 ==> default: Waiting for domain to get an IP address... 00:01:32.514 ==> default: Waiting for SSH to become available... 00:01:33.451 ==> default: Configuring and enabling network interfaces... 00:01:38.725 default: SSH address: 192.168.121.99:22 00:01:38.725 default: SSH username: vagrant 00:01:38.726 default: SSH auth method: private key 00:01:40.631 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:47.196 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:53.807 ==> default: Mounting SSHFS shared folder... 00:01:55.184 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:55.184 ==> default: Checking Mount.. 00:01:56.561 ==> default: Folder Successfully Mounted! 00:01:56.561 ==> default: Running provisioner: file... 00:01:57.499 default: ~/.gitconfig => .gitconfig 00:01:57.758 00:01:57.758 SUCCESS! 00:01:57.758 00:01:57.758 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:57.758 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:57.758 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:57.758 00:01:57.769 [Pipeline] } 00:01:57.786 [Pipeline] // stage 00:01:57.796 [Pipeline] dir 00:01:57.796 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:01:57.798 [Pipeline] { 00:01:57.810 [Pipeline] catchError 00:01:57.812 [Pipeline] { 00:01:57.824 [Pipeline] sh 00:01:58.104 + vagrant ssh-config --host vagrant 00:01:58.104 + sed -ne /^Host/,$p 00:01:58.104 + tee ssh_conf 00:02:00.640 Host vagrant 00:02:00.640 HostName 192.168.121.99 00:02:00.640 User vagrant 00:02:00.640 Port 22 00:02:00.640 UserKnownHostsFile /dev/null 00:02:00.640 StrictHostKeyChecking no 00:02:00.640 PasswordAuthentication no 00:02:00.640 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:00.640 IdentitiesOnly yes 00:02:00.640 LogLevel FATAL 00:02:00.640 ForwardAgent yes 00:02:00.640 ForwardX11 yes 00:02:00.640 00:02:00.654 [Pipeline] withEnv 00:02:00.656 [Pipeline] { 00:02:00.670 [Pipeline] sh 00:02:00.949 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:00.950 source /etc/os-release 00:02:00.950 [[ -e /image.version ]] && img=$(< /image.version) 00:02:00.950 # Minimal, systemd-like check. 
00:02:00.950 if [[ -e /.dockerenv ]]; then 00:02:00.950 # Clear garbage from the node's name: 00:02:00.950 # agt-er_autotest_547-896 -> autotest_547-896 00:02:00.950 # $HOSTNAME is the actual container id 00:02:00.950 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:00.950 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:00.950 # We can assume this is a mount from a host where container is running, 00:02:00.950 # so fetch its hostname to easily identify the target swarm worker. 00:02:00.950 container="$(< /etc/hostname) ($agent)" 00:02:00.950 else 00:02:00.950 # Fallback 00:02:00.950 container=$agent 00:02:00.950 fi 00:02:00.950 fi 00:02:00.950 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:00.950 00:02:01.221 [Pipeline] } 00:02:01.237 [Pipeline] // withEnv 00:02:01.246 [Pipeline] setCustomBuildProperty 00:02:01.261 [Pipeline] stage 00:02:01.263 [Pipeline] { (Tests) 00:02:01.280 [Pipeline] sh 00:02:01.561 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:01.834 [Pipeline] sh 00:02:02.115 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:02.389 [Pipeline] timeout 00:02:02.389 Timeout set to expire in 1 hr 0 min 00:02:02.391 [Pipeline] { 00:02:02.405 [Pipeline] sh 00:02:02.685 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:03.253 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:03.266 [Pipeline] sh 00:02:03.545 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:03.819 [Pipeline] sh 00:02:04.102 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:04.377 [Pipeline] sh 00:02:04.657 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:04.916 ++ readlink -f spdk_repo 00:02:04.916 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:04.916 + [[ -n /home/vagrant/spdk_repo ]] 00:02:04.916 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:04.916 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:04.916 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:04.916 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:04.916 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:04.916 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:04.916 + cd /home/vagrant/spdk_repo 00:02:04.916 + source /etc/os-release 00:02:04.916 ++ NAME='Fedora Linux' 00:02:04.916 ++ VERSION='39 (Cloud Edition)' 00:02:04.916 ++ ID=fedora 00:02:04.916 ++ VERSION_ID=39 00:02:04.916 ++ VERSION_CODENAME= 00:02:04.916 ++ PLATFORM_ID=platform:f39 00:02:04.916 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:04.916 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:04.916 ++ LOGO=fedora-logo-icon 00:02:04.916 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:04.916 ++ HOME_URL=https://fedoraproject.org/ 00:02:04.916 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:04.916 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:04.916 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:04.916 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:04.916 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:04.916 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:04.916 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:04.916 ++ SUPPORT_END=2024-11-12 00:02:04.916 ++ VARIANT='Cloud Edition' 00:02:04.916 ++ VARIANT_ID=cloud 00:02:04.916 + uname -a 00:02:04.916 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:04.916 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:04.916 Hugepages 00:02:04.916 node hugesize free / total 00:02:04.916 node0 1048576kB 0 / 0 00:02:04.916 node0 2048kB 0 / 0 00:02:04.916 00:02:04.916 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:04.916 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:04.916 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:05.176 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:05.176 + rm -f /tmp/spdk-ld-path 00:02:05.176 + source autorun-spdk.conf 00:02:05.176 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.176 ++ SPDK_TEST_NVMF=1 00:02:05.176 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.176 ++ SPDK_TEST_USDT=1 00:02:05.176 ++ SPDK_RUN_UBSAN=1 00:02:05.176 ++ SPDK_TEST_NVMF_MDNS=1 00:02:05.176 ++ NET_TYPE=virt 00:02:05.176 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:05.176 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:05.176 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:05.176 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:05.176 ++ RUN_NIGHTLY=1 00:02:05.176 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:05.176 + [[ -n '' ]] 00:02:05.176 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:05.176 + for M in /var/spdk/build-*-manifest.txt 00:02:05.176 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:05.176 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:05.176 + for M in /var/spdk/build-*-manifest.txt 00:02:05.176 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:05.176 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:05.176 + for M in /var/spdk/build-*-manifest.txt 00:02:05.176 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:05.176 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:05.176 ++ uname 00:02:05.176 + [[ Linux == \L\i\n\u\x ]] 00:02:05.176 + sudo dmesg -T 00:02:05.176 + sudo dmesg --clear 00:02:05.176 + dmesg_pid=5974 00:02:05.176 + [[ Fedora Linux == FreeBSD ]] 00:02:05.176 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.176 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.176 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:05.176 + sudo dmesg -Tw 00:02:05.176 + [[ -x /usr/src/fio-static/fio ]] 00:02:05.176 + export FIO_BIN=/usr/src/fio-static/fio 00:02:05.176 + FIO_BIN=/usr/src/fio-static/fio 00:02:05.176 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:05.176 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:05.176 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:05.176 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:05.176 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:05.176 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:05.176 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:05.176 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:05.176 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:05.176 Test configuration: 00:02:05.176 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.176 SPDK_TEST_NVMF=1 00:02:05.176 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.176 SPDK_TEST_USDT=1 00:02:05.176 SPDK_RUN_UBSAN=1 00:02:05.176 SPDK_TEST_NVMF_MDNS=1 00:02:05.176 NET_TYPE=virt 00:02:05.176 SPDK_JSONRPC_GO_CLIENT=1 00:02:05.176 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:05.176 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:05.176 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:05.176 RUN_NIGHTLY=1 14:48:28 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:05.176 14:48:28 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:05.176 14:48:28 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:05.176 14:48:28 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:05.176 14:48:28 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:05.176 14:48:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.176 14:48:28 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.176 14:48:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.176 14:48:28 -- paths/export.sh@5 -- $ export PATH 00:02:05.176 14:48:28 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.176 14:48:28 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:05.176 14:48:28 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:05.176 14:48:28 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731941308.XXXXXX 00:02:05.176 14:48:28 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731941308.zeWZYL 00:02:05.176 14:48:28 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:05.176 14:48:28 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:02:05.176 14:48:28 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:05.176 14:48:28 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:05.176 14:48:28 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:05.176 14:48:28 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:05.176 14:48:28 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:05.176 14:48:28 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:05.176 14:48:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.436 14:48:28 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:02:05.436 14:48:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:05.436 14:48:28 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:05.436 14:48:28 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:05.436 14:48:28 -- spdk/autobuild.sh@16 -- $ date -u 00:02:05.436 Mon Nov 18 02:48:28 PM UTC 2024 00:02:05.436 14:48:28 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:05.436 LTS-67-gc13c99a5e 00:02:05.436 14:48:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:05.436 14:48:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:05.436 14:48:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:05.436 14:48:28 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:05.436 14:48:28 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:05.436 14:48:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.436 ************************************ 00:02:05.436 START TEST ubsan 00:02:05.436 ************************************ 00:02:05.436 using ubsan 00:02:05.436 14:48:28 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:05.436 00:02:05.436 real 0m0.000s 00:02:05.436 user 0m0.000s 00:02:05.436 sys 0m0.000s 00:02:05.436 14:48:28 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:05.436 14:48:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.436 ************************************ 00:02:05.436 END TEST ubsan 00:02:05.436 ************************************ 00:02:05.436 
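Each autobuild step, like the ubsan check that just finished, is executed through a run_test wrapper that prints the START TEST / END TEST banners and the real/user/sys timing seen above. A rough sketch of that wrapper pattern, assuming only plain bash; the actual helper lives in SPDK's common test scripts and does additional argument checking and bookkeeping:

    # run_test NAME CMD...  -- banner-and-timing wrapper (illustrative sketch)
    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test ubsan echo 'using ubsan'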
14:48:28 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:05.436 14:48:28 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:05.436 14:48:28 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:05.436 14:48:28 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:05.436 14:48:28 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:05.436 14:48:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.436 ************************************ 00:02:05.436 START TEST build_native_dpdk 00:02:05.436 ************************************ 00:02:05.436 14:48:28 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:05.436 14:48:28 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:05.436 14:48:28 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:05.436 14:48:28 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:05.436 14:48:28 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:05.436 14:48:28 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:05.436 14:48:28 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:05.436 14:48:28 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:05.436 14:48:28 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:05.436 14:48:28 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:05.436 14:48:28 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:05.436 14:48:28 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:05.436 14:48:28 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:05.436 14:48:28 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:05.436 14:48:28 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:05.436 14:48:28 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:05.436 14:48:28 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:05.436 14:48:28 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:05.436 14:48:28 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:05.436 14:48:28 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:05.436 14:48:28 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:05.436 eeb0605f11 version: 23.11.0 00:02:05.436 238778122a doc: update release notes for 23.11 00:02:05.436 46aa6b3cfc doc: fix description of RSS features 00:02:05.436 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:05.436 7e421ae345 devtools: support skipping forbid rule check 00:02:05.436 14:48:28 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:05.436 14:48:28 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:05.436 14:48:28 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:05.436 14:48:28 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:05.436 14:48:28 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:05.436 14:48:28 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:05.436 14:48:28 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:05.436 14:48:28 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:05.436 14:48:28 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:05.436 14:48:28 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:05.436 14:48:28 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:05.436 14:48:28 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:05.436 14:48:28 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:05.436 14:48:28 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:05.436 14:48:28 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:05.436 14:48:28 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:05.436 14:48:28 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:05.436 14:48:28 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:05.436 14:48:28 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:05.436 14:48:28 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:05.436 14:48:28 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:05.436 14:48:28 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:05.436 14:48:28 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:05.436 14:48:28 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:05.436 14:48:28 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:05.436 14:48:28 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:05.436 14:48:28 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:05.436 14:48:28 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:05.436 14:48:28 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:05.436 14:48:28 -- scripts/common.sh@343 -- $ case "$op" in 00:02:05.436 14:48:28 -- scripts/common.sh@344 -- $ : 1 00:02:05.436 14:48:28 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:05.436 14:48:28 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:05.436 14:48:28 -- scripts/common.sh@364 -- $ decimal 23 00:02:05.436 14:48:28 -- scripts/common.sh@352 -- $ local d=23 00:02:05.436 14:48:28 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:05.436 14:48:28 -- scripts/common.sh@354 -- $ echo 23 00:02:05.436 14:48:28 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:05.436 14:48:28 -- scripts/common.sh@365 -- $ decimal 21 00:02:05.436 14:48:28 -- scripts/common.sh@352 -- $ local d=21 00:02:05.436 14:48:28 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:05.436 14:48:28 -- scripts/common.sh@354 -- $ echo 21 00:02:05.436 14:48:28 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:05.436 14:48:28 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:05.436 14:48:28 -- scripts/common.sh@366 -- $ return 1 00:02:05.436 14:48:28 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:05.436 patching file config/rte_config.h 00:02:05.436 Hunk #1 succeeded at 60 (offset 1 line). 00:02:05.436 14:48:28 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:05.436 14:48:28 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:05.436 14:48:28 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:05.436 14:48:28 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:05.436 14:48:28 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:05.436 14:48:28 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:05.436 14:48:28 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:05.436 14:48:28 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:05.436 14:48:28 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:05.436 14:48:28 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:05.436 14:48:28 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:05.436 14:48:28 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:05.436 14:48:28 -- scripts/common.sh@343 -- $ case "$op" in 00:02:05.436 14:48:28 -- scripts/common.sh@344 -- $ : 1 00:02:05.436 14:48:28 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:05.436 14:48:28 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:05.436 14:48:28 -- scripts/common.sh@364 -- $ decimal 23 00:02:05.436 14:48:28 -- scripts/common.sh@352 -- $ local d=23 00:02:05.436 14:48:28 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:05.436 14:48:28 -- scripts/common.sh@354 -- $ echo 23 00:02:05.436 14:48:28 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:05.436 14:48:28 -- scripts/common.sh@365 -- $ decimal 24 00:02:05.436 14:48:28 -- scripts/common.sh@352 -- $ local d=24 00:02:05.436 14:48:28 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:05.436 14:48:28 -- scripts/common.sh@354 -- $ echo 24 00:02:05.436 14:48:28 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:05.436 14:48:28 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:05.436 14:48:28 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:05.436 14:48:28 -- scripts/common.sh@367 -- $ return 0 00:02:05.436 14:48:28 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:05.436 patching file lib/pcapng/rte_pcapng.c 00:02:05.436 14:48:28 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:05.436 14:48:28 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:05.436 14:48:28 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:05.436 14:48:28 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:05.436 14:48:28 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:10.707 The Meson build system 00:02:10.707 Version: 1.5.0 00:02:10.707 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:10.707 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:10.707 Build type: native build 00:02:10.707 Program cat found: YES (/usr/bin/cat) 00:02:10.707 Project name: DPDK 00:02:10.707 Project version: 23.11.0 00:02:10.707 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:10.707 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:10.707 Host machine cpu family: x86_64 00:02:10.707 Host machine cpu: x86_64 00:02:10.707 Message: ## Building in Developer Mode ## 00:02:10.707 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:10.707 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:10.707 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:10.707 Program python3 found: YES (/usr/bin/python3) 00:02:10.707 Program cat found: YES (/usr/bin/cat) 00:02:10.707 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:10.707 Compiler for C supports arguments -march=native: YES 00:02:10.707 Checking for size of "void *" : 8 00:02:10.707 Checking for size of "void *" : 8 (cached) 00:02:10.707 Library m found: YES 00:02:10.707 Library numa found: YES 00:02:10.707 Has header "numaif.h" : YES 00:02:10.707 Library fdt found: NO 00:02:10.707 Library execinfo found: NO 00:02:10.707 Has header "execinfo.h" : YES 00:02:10.707 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:10.707 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:10.707 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:10.707 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:10.707 Run-time dependency openssl found: YES 3.1.1 00:02:10.707 Run-time dependency libpcap found: YES 1.10.4 00:02:10.707 Has header "pcap.h" with dependency libpcap: YES 00:02:10.707 Compiler for C supports arguments -Wcast-qual: YES 00:02:10.707 Compiler for C supports arguments -Wdeprecated: YES 00:02:10.707 Compiler for C supports arguments -Wformat: YES 00:02:10.707 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:10.707 Compiler for C supports arguments -Wformat-security: NO 00:02:10.707 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:10.707 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:10.707 Compiler for C supports arguments -Wnested-externs: YES 00:02:10.707 Compiler for C supports arguments -Wold-style-definition: YES 00:02:10.707 Compiler for C supports arguments -Wpointer-arith: YES 00:02:10.707 Compiler for C supports arguments -Wsign-compare: YES 00:02:10.707 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:10.707 Compiler for C supports arguments -Wundef: YES 00:02:10.707 Compiler for C supports arguments -Wwrite-strings: YES 00:02:10.707 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:10.707 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:10.707 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:10.707 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:10.707 Program objdump found: YES (/usr/bin/objdump) 00:02:10.707 Compiler for C supports arguments -mavx512f: YES 00:02:10.707 Checking if "AVX512 checking" compiles: YES 00:02:10.707 Fetching value of define "__SSE4_2__" : 1 00:02:10.707 Fetching value of define "__AES__" : 1 00:02:10.707 Fetching value of define "__AVX__" : 1 00:02:10.707 Fetching value of define "__AVX2__" : 1 00:02:10.707 Fetching value of define "__AVX512BW__" : (undefined) 00:02:10.707 Fetching value of define "__AVX512CD__" : (undefined) 00:02:10.707 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:10.707 Fetching value of define "__AVX512F__" : (undefined) 00:02:10.707 Fetching value of define "__AVX512VL__" : (undefined) 00:02:10.707 Fetching value of define "__PCLMUL__" : 1 00:02:10.707 Fetching value of define "__RDRND__" : 1 00:02:10.707 Fetching value of define "__RDSEED__" : 1 00:02:10.707 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:10.707 Fetching value of define "__znver1__" : (undefined) 00:02:10.707 Fetching value of define "__znver2__" : (undefined) 00:02:10.707 Fetching value of define "__znver3__" : (undefined) 00:02:10.707 Fetching value of define "__znver4__" : (undefined) 00:02:10.707 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:10.707 Message: lib/log: Defining dependency "log" 00:02:10.707 Message: lib/kvargs: Defining dependency "kvargs" 00:02:10.707 
Message: lib/telemetry: Defining dependency "telemetry" 00:02:10.707 Checking for function "getentropy" : NO 00:02:10.707 Message: lib/eal: Defining dependency "eal" 00:02:10.707 Message: lib/ring: Defining dependency "ring" 00:02:10.707 Message: lib/rcu: Defining dependency "rcu" 00:02:10.707 Message: lib/mempool: Defining dependency "mempool" 00:02:10.707 Message: lib/mbuf: Defining dependency "mbuf" 00:02:10.707 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:10.707 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:10.707 Compiler for C supports arguments -mpclmul: YES 00:02:10.707 Compiler for C supports arguments -maes: YES 00:02:10.707 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:10.707 Compiler for C supports arguments -mavx512bw: YES 00:02:10.707 Compiler for C supports arguments -mavx512dq: YES 00:02:10.707 Compiler for C supports arguments -mavx512vl: YES 00:02:10.707 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:10.707 Compiler for C supports arguments -mavx2: YES 00:02:10.707 Compiler for C supports arguments -mavx: YES 00:02:10.707 Message: lib/net: Defining dependency "net" 00:02:10.707 Message: lib/meter: Defining dependency "meter" 00:02:10.707 Message: lib/ethdev: Defining dependency "ethdev" 00:02:10.707 Message: lib/pci: Defining dependency "pci" 00:02:10.707 Message: lib/cmdline: Defining dependency "cmdline" 00:02:10.707 Message: lib/metrics: Defining dependency "metrics" 00:02:10.707 Message: lib/hash: Defining dependency "hash" 00:02:10.707 Message: lib/timer: Defining dependency "timer" 00:02:10.707 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:10.707 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:10.707 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:10.707 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:10.707 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:10.707 Message: lib/acl: Defining dependency "acl" 00:02:10.707 Message: lib/bbdev: Defining dependency "bbdev" 00:02:10.707 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:10.707 Run-time dependency libelf found: YES 0.191 00:02:10.707 Message: lib/bpf: Defining dependency "bpf" 00:02:10.707 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:10.707 Message: lib/compressdev: Defining dependency "compressdev" 00:02:10.707 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:10.707 Message: lib/distributor: Defining dependency "distributor" 00:02:10.707 Message: lib/dmadev: Defining dependency "dmadev" 00:02:10.707 Message: lib/efd: Defining dependency "efd" 00:02:10.707 Message: lib/eventdev: Defining dependency "eventdev" 00:02:10.707 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:10.707 Message: lib/gpudev: Defining dependency "gpudev" 00:02:10.707 Message: lib/gro: Defining dependency "gro" 00:02:10.707 Message: lib/gso: Defining dependency "gso" 00:02:10.707 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:10.707 Message: lib/jobstats: Defining dependency "jobstats" 00:02:10.707 Message: lib/latencystats: Defining dependency "latencystats" 00:02:10.707 Message: lib/lpm: Defining dependency "lpm" 00:02:10.707 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:10.707 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:10.707 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:10.707 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:02:10.707 Message: lib/member: Defining dependency "member" 00:02:10.707 Message: lib/pcapng: Defining dependency "pcapng" 00:02:10.707 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:10.707 Message: lib/power: Defining dependency "power" 00:02:10.707 Message: lib/rawdev: Defining dependency "rawdev" 00:02:10.707 Message: lib/regexdev: Defining dependency "regexdev" 00:02:10.707 Message: lib/mldev: Defining dependency "mldev" 00:02:10.707 Message: lib/rib: Defining dependency "rib" 00:02:10.707 Message: lib/reorder: Defining dependency "reorder" 00:02:10.707 Message: lib/sched: Defining dependency "sched" 00:02:10.707 Message: lib/security: Defining dependency "security" 00:02:10.707 Message: lib/stack: Defining dependency "stack" 00:02:10.707 Has header "linux/userfaultfd.h" : YES 00:02:10.708 Has header "linux/vduse.h" : YES 00:02:10.708 Message: lib/vhost: Defining dependency "vhost" 00:02:10.708 Message: lib/ipsec: Defining dependency "ipsec" 00:02:10.708 Message: lib/pdcp: Defining dependency "pdcp" 00:02:10.708 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:10.708 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:10.708 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:10.708 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:10.708 Message: lib/fib: Defining dependency "fib" 00:02:10.708 Message: lib/port: Defining dependency "port" 00:02:10.708 Message: lib/pdump: Defining dependency "pdump" 00:02:10.708 Message: lib/table: Defining dependency "table" 00:02:10.708 Message: lib/pipeline: Defining dependency "pipeline" 00:02:10.708 Message: lib/graph: Defining dependency "graph" 00:02:10.708 Message: lib/node: Defining dependency "node" 00:02:10.708 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:12.665 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:12.665 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:12.665 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:12.665 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:12.665 Compiler for C supports arguments -Wno-unused-value: YES 00:02:12.665 Compiler for C supports arguments -Wno-format: YES 00:02:12.665 Compiler for C supports arguments -Wno-format-security: YES 00:02:12.665 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:12.665 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:12.665 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:12.665 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:12.665 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:12.665 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:12.665 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:12.665 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:12.665 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:12.665 Has header "sys/epoll.h" : YES 00:02:12.665 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:12.665 Configuring doxy-api-html.conf using configuration 00:02:12.665 Configuring doxy-api-man.conf using configuration 00:02:12.665 Program mandb found: YES (/usr/bin/mandb) 00:02:12.665 Program sphinx-build found: NO 00:02:12.665 Configuring rte_build_config.h using configuration 00:02:12.665 Message: 00:02:12.665 ================= 00:02:12.665 Applications Enabled 00:02:12.665 ================= 
00:02:12.665 00:02:12.665 apps: 00:02:12.666 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:12.666 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:12.666 test-pmd, test-regex, test-sad, test-security-perf, 00:02:12.666 00:02:12.666 Message: 00:02:12.666 ================= 00:02:12.666 Libraries Enabled 00:02:12.666 ================= 00:02:12.666 00:02:12.666 libs: 00:02:12.666 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:12.666 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:12.666 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:12.666 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:12.666 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:12.666 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:12.666 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:12.666 00:02:12.666 00:02:12.666 Message: 00:02:12.666 =============== 00:02:12.666 Drivers Enabled 00:02:12.666 =============== 00:02:12.666 00:02:12.666 common: 00:02:12.666 00:02:12.666 bus: 00:02:12.666 pci, vdev, 00:02:12.666 mempool: 00:02:12.666 ring, 00:02:12.666 dma: 00:02:12.666 00:02:12.666 net: 00:02:12.666 i40e, 00:02:12.666 raw: 00:02:12.666 00:02:12.666 crypto: 00:02:12.666 00:02:12.666 compress: 00:02:12.666 00:02:12.666 regex: 00:02:12.666 00:02:12.666 ml: 00:02:12.666 00:02:12.666 vdpa: 00:02:12.666 00:02:12.666 event: 00:02:12.666 00:02:12.666 baseband: 00:02:12.666 00:02:12.666 gpu: 00:02:12.666 00:02:12.666 00:02:12.666 Message: 00:02:12.666 ================= 00:02:12.666 Content Skipped 00:02:12.666 ================= 00:02:12.666 00:02:12.666 apps: 00:02:12.666 00:02:12.666 libs: 00:02:12.666 00:02:12.666 drivers: 00:02:12.666 common/cpt: not in enabled drivers build config 00:02:12.666 common/dpaax: not in enabled drivers build config 00:02:12.666 common/iavf: not in enabled drivers build config 00:02:12.666 common/idpf: not in enabled drivers build config 00:02:12.666 common/mvep: not in enabled drivers build config 00:02:12.666 common/octeontx: not in enabled drivers build config 00:02:12.666 bus/auxiliary: not in enabled drivers build config 00:02:12.666 bus/cdx: not in enabled drivers build config 00:02:12.666 bus/dpaa: not in enabled drivers build config 00:02:12.666 bus/fslmc: not in enabled drivers build config 00:02:12.666 bus/ifpga: not in enabled drivers build config 00:02:12.666 bus/platform: not in enabled drivers build config 00:02:12.666 bus/vmbus: not in enabled drivers build config 00:02:12.666 common/cnxk: not in enabled drivers build config 00:02:12.666 common/mlx5: not in enabled drivers build config 00:02:12.666 common/nfp: not in enabled drivers build config 00:02:12.666 common/qat: not in enabled drivers build config 00:02:12.666 common/sfc_efx: not in enabled drivers build config 00:02:12.666 mempool/bucket: not in enabled drivers build config 00:02:12.666 mempool/cnxk: not in enabled drivers build config 00:02:12.666 mempool/dpaa: not in enabled drivers build config 00:02:12.666 mempool/dpaa2: not in enabled drivers build config 00:02:12.666 mempool/octeontx: not in enabled drivers build config 00:02:12.666 mempool/stack: not in enabled drivers build config 00:02:12.666 dma/cnxk: not in enabled drivers build config 00:02:12.666 dma/dpaa: not in enabled drivers build config 00:02:12.666 dma/dpaa2: not in enabled drivers build config 00:02:12.666 
dma/hisilicon: not in enabled drivers build config 00:02:12.666 dma/idxd: not in enabled drivers build config 00:02:12.666 dma/ioat: not in enabled drivers build config 00:02:12.666 dma/skeleton: not in enabled drivers build config 00:02:12.666 net/af_packet: not in enabled drivers build config 00:02:12.666 net/af_xdp: not in enabled drivers build config 00:02:12.666 net/ark: not in enabled drivers build config 00:02:12.666 net/atlantic: not in enabled drivers build config 00:02:12.666 net/avp: not in enabled drivers build config 00:02:12.666 net/axgbe: not in enabled drivers build config 00:02:12.666 net/bnx2x: not in enabled drivers build config 00:02:12.666 net/bnxt: not in enabled drivers build config 00:02:12.666 net/bonding: not in enabled drivers build config 00:02:12.666 net/cnxk: not in enabled drivers build config 00:02:12.666 net/cpfl: not in enabled drivers build config 00:02:12.666 net/cxgbe: not in enabled drivers build config 00:02:12.666 net/dpaa: not in enabled drivers build config 00:02:12.666 net/dpaa2: not in enabled drivers build config 00:02:12.666 net/e1000: not in enabled drivers build config 00:02:12.666 net/ena: not in enabled drivers build config 00:02:12.666 net/enetc: not in enabled drivers build config 00:02:12.666 net/enetfec: not in enabled drivers build config 00:02:12.666 net/enic: not in enabled drivers build config 00:02:12.666 net/failsafe: not in enabled drivers build config 00:02:12.666 net/fm10k: not in enabled drivers build config 00:02:12.666 net/gve: not in enabled drivers build config 00:02:12.666 net/hinic: not in enabled drivers build config 00:02:12.666 net/hns3: not in enabled drivers build config 00:02:12.666 net/iavf: not in enabled drivers build config 00:02:12.666 net/ice: not in enabled drivers build config 00:02:12.666 net/idpf: not in enabled drivers build config 00:02:12.666 net/igc: not in enabled drivers build config 00:02:12.666 net/ionic: not in enabled drivers build config 00:02:12.666 net/ipn3ke: not in enabled drivers build config 00:02:12.666 net/ixgbe: not in enabled drivers build config 00:02:12.666 net/mana: not in enabled drivers build config 00:02:12.666 net/memif: not in enabled drivers build config 00:02:12.666 net/mlx4: not in enabled drivers build config 00:02:12.666 net/mlx5: not in enabled drivers build config 00:02:12.666 net/mvneta: not in enabled drivers build config 00:02:12.666 net/mvpp2: not in enabled drivers build config 00:02:12.666 net/netvsc: not in enabled drivers build config 00:02:12.666 net/nfb: not in enabled drivers build config 00:02:12.666 net/nfp: not in enabled drivers build config 00:02:12.666 net/ngbe: not in enabled drivers build config 00:02:12.666 net/null: not in enabled drivers build config 00:02:12.666 net/octeontx: not in enabled drivers build config 00:02:12.666 net/octeon_ep: not in enabled drivers build config 00:02:12.666 net/pcap: not in enabled drivers build config 00:02:12.666 net/pfe: not in enabled drivers build config 00:02:12.666 net/qede: not in enabled drivers build config 00:02:12.666 net/ring: not in enabled drivers build config 00:02:12.666 net/sfc: not in enabled drivers build config 00:02:12.666 net/softnic: not in enabled drivers build config 00:02:12.666 net/tap: not in enabled drivers build config 00:02:12.666 net/thunderx: not in enabled drivers build config 00:02:12.666 net/txgbe: not in enabled drivers build config 00:02:12.666 net/vdev_netvsc: not in enabled drivers build config 00:02:12.666 net/vhost: not in enabled drivers build config 00:02:12.666 net/virtio: 
not in enabled drivers build config 00:02:12.666 net/vmxnet3: not in enabled drivers build config 00:02:12.666 raw/cnxk_bphy: not in enabled drivers build config 00:02:12.666 raw/cnxk_gpio: not in enabled drivers build config 00:02:12.666 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:12.666 raw/ifpga: not in enabled drivers build config 00:02:12.666 raw/ntb: not in enabled drivers build config 00:02:12.666 raw/skeleton: not in enabled drivers build config 00:02:12.666 crypto/armv8: not in enabled drivers build config 00:02:12.666 crypto/bcmfs: not in enabled drivers build config 00:02:12.666 crypto/caam_jr: not in enabled drivers build config 00:02:12.666 crypto/ccp: not in enabled drivers build config 00:02:12.666 crypto/cnxk: not in enabled drivers build config 00:02:12.666 crypto/dpaa_sec: not in enabled drivers build config 00:02:12.666 crypto/dpaa2_sec: not in enabled drivers build config 00:02:12.666 crypto/ipsec_mb: not in enabled drivers build config 00:02:12.666 crypto/mlx5: not in enabled drivers build config 00:02:12.666 crypto/mvsam: not in enabled drivers build config 00:02:12.666 crypto/nitrox: not in enabled drivers build config 00:02:12.666 crypto/null: not in enabled drivers build config 00:02:12.666 crypto/octeontx: not in enabled drivers build config 00:02:12.666 crypto/openssl: not in enabled drivers build config 00:02:12.666 crypto/scheduler: not in enabled drivers build config 00:02:12.666 crypto/uadk: not in enabled drivers build config 00:02:12.666 crypto/virtio: not in enabled drivers build config 00:02:12.666 compress/isal: not in enabled drivers build config 00:02:12.666 compress/mlx5: not in enabled drivers build config 00:02:12.666 compress/octeontx: not in enabled drivers build config 00:02:12.666 compress/zlib: not in enabled drivers build config 00:02:12.666 regex/mlx5: not in enabled drivers build config 00:02:12.666 regex/cn9k: not in enabled drivers build config 00:02:12.666 ml/cnxk: not in enabled drivers build config 00:02:12.666 vdpa/ifc: not in enabled drivers build config 00:02:12.666 vdpa/mlx5: not in enabled drivers build config 00:02:12.666 vdpa/nfp: not in enabled drivers build config 00:02:12.666 vdpa/sfc: not in enabled drivers build config 00:02:12.666 event/cnxk: not in enabled drivers build config 00:02:12.666 event/dlb2: not in enabled drivers build config 00:02:12.666 event/dpaa: not in enabled drivers build config 00:02:12.666 event/dpaa2: not in enabled drivers build config 00:02:12.666 event/dsw: not in enabled drivers build config 00:02:12.666 event/opdl: not in enabled drivers build config 00:02:12.666 event/skeleton: not in enabled drivers build config 00:02:12.666 event/sw: not in enabled drivers build config 00:02:12.666 event/octeontx: not in enabled drivers build config 00:02:12.666 baseband/acc: not in enabled drivers build config 00:02:12.666 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:12.666 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:12.666 baseband/la12xx: not in enabled drivers build config 00:02:12.666 baseband/null: not in enabled drivers build config 00:02:12.666 baseband/turbo_sw: not in enabled drivers build config 00:02:12.667 gpu/cuda: not in enabled drivers build config 00:02:12.667 00:02:12.667 00:02:12.667 Build targets in project: 220 00:02:12.667 00:02:12.667 DPDK 23.11.0 00:02:12.667 00:02:12.667 User defined options 00:02:12.667 libdir : lib 00:02:12.667 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:12.667 c_args : -fPIC -g -fcommon -Werror 
-Wno-stringop-overflow 00:02:12.667 c_link_args : 00:02:12.667 enable_docs : false 00:02:12.667 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:12.667 enable_kmods : false 00:02:12.667 machine : native 00:02:12.667 tests : false 00:02:12.667 00:02:12.667 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:12.667 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:12.667 14:48:35 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:12.667 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:12.667 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:12.667 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:12.667 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:12.667 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:12.667 [5/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:12.667 [6/710] Linking static target lib/librte_kvargs.a 00:02:12.667 [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:12.926 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:12.926 [9/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:12.926 [10/710] Linking static target lib/librte_log.a 00:02:12.926 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.185 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:13.185 [13/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:13.185 [14/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:13.185 [15/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.185 [16/710] Linking target lib/librte_log.so.24.0 00:02:13.185 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:13.444 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:13.444 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:13.444 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:13.704 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:13.704 [22/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:13.704 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:13.704 [24/710] Linking target lib/librte_kvargs.so.24.0 00:02:13.704 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:13.704 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:13.963 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:13.963 [28/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:13.963 [29/710] Linking static target lib/librte_telemetry.a 00:02:13.963 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:13.963 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:13.963 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:14.222 [33/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:14.222 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:14.222 [35/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:14.222 [36/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.222 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:14.222 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:14.222 [39/710] Linking target lib/librte_telemetry.so.24.0 00:02:14.222 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:14.481 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:14.481 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:14.481 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:14.481 [44/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:14.481 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:14.740 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:14.740 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:14.740 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:14.740 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:15.000 [50/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:15.000 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:15.000 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:15.000 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:15.000 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:15.259 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:15.259 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:15.259 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:15.259 [58/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:15.259 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:15.259 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:15.518 [61/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:15.518 [62/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:15.518 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:15.518 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:15.518 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:15.518 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:15.777 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:15.777 [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:16.035 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:16.035 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:16.035 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:16.035 [72/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:16.035 
[73/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:16.035 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:16.035 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:16.035 [76/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:16.035 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:16.295 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:16.295 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:16.295 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:16.554 [81/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:16.554 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:16.554 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:16.554 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:16.554 [85/710] Linking static target lib/librte_ring.a 00:02:16.814 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:16.814 [87/710] Linking static target lib/librte_eal.a 00:02:16.814 [88/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:16.814 [89/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:16.814 [90/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.074 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:17.074 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:17.074 [93/710] Linking static target lib/librte_mempool.a 00:02:17.074 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:17.074 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:17.333 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:17.333 [97/710] Linking static target lib/librte_rcu.a 00:02:17.333 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:17.333 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:17.592 [100/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:17.592 [101/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:17.592 [102/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.592 [103/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.592 [104/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:17.852 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:17.852 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:17.852 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:17.852 [108/710] Linking static target lib/librte_mbuf.a 00:02:17.852 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:17.852 [110/710] Linking static target lib/librte_net.a 00:02:18.111 [111/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:18.111 [112/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:18.111 [113/710] Linking static target lib/librte_meter.a 00:02:18.111 [114/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.370 [115/710] Compiling C 
object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:18.370 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:18.370 [117/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.370 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:18.370 [119/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.940 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:18.940 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:19.200 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:19.200 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:19.200 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:19.200 [125/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:19.200 [126/710] Linking static target lib/librte_pci.a 00:02:19.459 [127/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:19.459 [128/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:19.459 [129/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.459 [130/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:19.459 [131/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:19.459 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:19.718 [133/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:19.718 [134/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:19.718 [135/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:19.718 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:19.718 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:19.718 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:19.718 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:19.718 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:19.977 [141/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:19.977 [142/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:19.977 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:19.977 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:19.977 [145/710] Linking static target lib/librte_cmdline.a 00:02:20.237 [146/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:20.237 [147/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:20.237 [148/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:20.237 [149/710] Linking static target lib/librte_metrics.a 00:02:20.496 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:20.755 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.755 [152/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:20.755 [153/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:20.755 [154/710] Linking static target lib/librte_timer.a 00:02:20.755 
[155/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.015 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.583 [157/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:21.584 [158/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:21.584 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:21.584 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:22.152 [161/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:22.152 [162/710] Linking static target lib/librte_ethdev.a 00:02:22.152 [163/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:22.152 [164/710] Linking static target lib/librte_bitratestats.a 00:02:22.152 [165/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:22.152 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:22.411 [167/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.411 [168/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.411 [169/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:22.411 [170/710] Linking target lib/librte_eal.so.24.0 00:02:22.411 [171/710] Linking static target lib/librte_hash.a 00:02:22.411 [172/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:22.411 [173/710] Linking static target lib/librte_bbdev.a 00:02:22.411 [174/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:22.670 [175/710] Linking target lib/librte_ring.so.24.0 00:02:22.670 [176/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:22.670 [177/710] Linking target lib/librte_rcu.so.24.0 00:02:22.670 [178/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:22.670 [179/710] Linking target lib/librte_mempool.so.24.0 00:02:22.670 [180/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:22.930 [181/710] Linking target lib/librte_meter.so.24.0 00:02:22.930 [182/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:22.930 [183/710] Linking target lib/librte_pci.so.24.0 00:02:22.930 [184/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:22.930 [185/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:22.930 [186/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:22.930 [187/710] Linking target lib/librte_mbuf.so.24.0 00:02:22.930 [188/710] Linking static target lib/acl/libavx2_tmp.a 00:02:22.930 [189/710] Linking target lib/librte_timer.so.24.0 00:02:22.930 [190/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:22.930 [191/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.930 [192/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:22.930 [193/710] Linking static target lib/acl/libavx512_tmp.a 00:02:22.930 [194/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:22.930 [195/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.930 [196/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:22.930 [197/710] Generating symbol file 
lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:23.189 [198/710] Linking target lib/librte_net.so.24.0 00:02:23.189 [199/710] Linking target lib/librte_bbdev.so.24.0 00:02:23.189 [200/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:23.189 [201/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:23.189 [202/710] Linking target lib/librte_cmdline.so.24.0 00:02:23.189 [203/710] Linking target lib/librte_hash.so.24.0 00:02:23.189 [204/710] Linking static target lib/librte_acl.a 00:02:23.449 [205/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:23.449 [206/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:23.449 [207/710] Linking static target lib/librte_cfgfile.a 00:02:23.449 [208/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.449 [209/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:23.449 [210/710] Linking target lib/librte_acl.so.24.0 00:02:23.708 [211/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:23.708 [212/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.708 [213/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:23.708 [214/710] Linking target lib/librte_cfgfile.so.24.0 00:02:23.708 [215/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:23.967 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:23.967 [217/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:23.967 [218/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:24.227 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:24.227 [220/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:24.227 [221/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:24.227 [222/710] Linking static target lib/librte_bpf.a 00:02:24.227 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:24.227 [224/710] Linking static target lib/librte_compressdev.a 00:02:24.486 [225/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:24.486 [226/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.486 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:24.745 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:24.745 [229/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:24.745 [230/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:24.745 [231/710] Linking static target lib/librte_distributor.a 00:02:24.745 [232/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.745 [233/710] Linking target lib/librte_compressdev.so.24.0 00:02:25.005 [234/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:25.005 [235/710] Linking static target lib/librte_dmadev.a 00:02:25.005 [236/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.005 [237/710] Linking target lib/librte_distributor.so.24.0 00:02:25.264 [238/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:25.264 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.264 [240/710] Linking target lib/librte_dmadev.so.24.0 00:02:25.523 [241/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:25.523 [242/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:25.793 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:25.793 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:25.793 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:25.793 [246/710] Linking static target lib/librte_efd.a 00:02:26.056 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:26.056 [248/710] Linking static target lib/librte_cryptodev.a 00:02:26.056 [249/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:26.056 [250/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.056 [251/710] Linking target lib/librte_efd.so.24.0 00:02:26.625 [252/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.625 [253/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:26.625 [254/710] Linking target lib/librte_ethdev.so.24.0 00:02:26.625 [255/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:26.625 [256/710] Linking static target lib/librte_dispatcher.a 00:02:26.625 [257/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:26.625 [258/710] Linking target lib/librte_metrics.so.24.0 00:02:26.625 [259/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:26.625 [260/710] Linking target lib/librte_bpf.so.24.0 00:02:26.884 [261/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:26.884 [262/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:26.884 [263/710] Linking static target lib/librte_gpudev.a 00:02:26.884 [264/710] Linking target lib/librte_bitratestats.so.24.0 00:02:26.884 [265/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:26.884 [266/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:26.884 [267/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:26.884 [268/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.144 [269/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:27.144 [270/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:27.144 [271/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.144 [272/710] Linking target lib/librte_cryptodev.so.24.0 00:02:27.403 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:27.663 [274/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.663 [275/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:27.663 [276/710] Linking target lib/librte_gpudev.so.24.0 00:02:27.663 [277/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:27.663 [278/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:27.663 [279/710] Linking static target lib/librte_eventdev.a 00:02:27.663 [280/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:27.663 [281/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:27.663 [282/710] Linking static target lib/librte_gro.a 00:02:27.663 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:27.663 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:27.663 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:27.922 [286/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.922 [287/710] Linking target lib/librte_gro.so.24.0 00:02:27.922 [288/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:28.182 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:28.182 [290/710] Linking static target lib/librte_gso.a 00:02:28.182 [291/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:28.182 [292/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:28.442 [293/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.442 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:28.442 [295/710] Linking target lib/librte_gso.so.24.0 00:02:28.442 [296/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:28.442 [297/710] Linking static target lib/librte_jobstats.a 00:02:28.442 [298/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:28.442 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:28.442 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:28.442 [301/710] Linking static target lib/librte_ip_frag.a 00:02:28.701 [302/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:28.701 [303/710] Linking static target lib/librte_latencystats.a 00:02:28.701 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.701 [305/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.701 [306/710] Linking target lib/librte_jobstats.so.24.0 00:02:28.959 [307/710] Linking target lib/librte_ip_frag.so.24.0 00:02:28.959 [308/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.959 [309/710] Linking target lib/librte_latencystats.so.24.0 00:02:28.959 [310/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:28.959 [311/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:28.959 [312/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:28.959 [313/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:28.959 [314/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:28.959 [315/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:29.217 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:29.217 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:29.475 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.475 [319/710] Linking 
target lib/librte_eventdev.so.24.0 00:02:29.475 [320/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:29.475 [321/710] Linking static target lib/librte_lpm.a 00:02:29.733 [322/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:29.733 [323/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:29.733 [324/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:29.734 [325/710] Linking target lib/librte_dispatcher.so.24.0 00:02:29.734 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:29.734 [327/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:29.734 [328/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:29.734 [329/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.992 [330/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:29.992 [331/710] Linking static target lib/librte_pcapng.a 00:02:29.992 [332/710] Linking target lib/librte_lpm.so.24.0 00:02:29.992 [333/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:29.992 [334/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:29.992 [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.992 [336/710] Linking target lib/librte_pcapng.so.24.0 00:02:30.250 [337/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:30.250 [338/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:30.250 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:30.509 [340/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:30.509 [341/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:30.509 [342/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:30.509 [343/710] Linking static target lib/librte_power.a 00:02:30.509 [344/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:30.509 [345/710] Linking static target lib/librte_member.a 00:02:30.509 [346/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:30.509 [347/710] Linking static target lib/librte_regexdev.a 00:02:30.509 [348/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:30.509 [349/710] Linking static target lib/librte_rawdev.a 00:02:30.767 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:30.767 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:30.767 [352/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.767 [353/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:30.767 [354/710] Linking target lib/librte_member.so.24.0 00:02:31.025 [355/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:31.025 [356/710] Linking static target lib/librte_mldev.a 00:02:31.025 [357/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:31.025 [358/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.025 [359/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.025 [360/710] Linking target lib/librte_rawdev.so.24.0 
00:02:31.025 [361/710] Linking target lib/librte_power.so.24.0 00:02:31.284 [362/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.284 [363/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:31.284 [364/710] Linking target lib/librte_regexdev.so.24.0 00:02:31.284 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:31.543 [366/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:31.543 [367/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:31.543 [368/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:31.543 [369/710] Linking static target lib/librte_reorder.a 00:02:31.543 [370/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:31.543 [371/710] Linking static target lib/librte_rib.a 00:02:31.803 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:31.803 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:31.803 [374/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:31.803 [375/710] Linking static target lib/librte_stack.a 00:02:31.803 [376/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.803 [377/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:31.803 [378/710] Linking target lib/librte_reorder.so.24.0 00:02:31.803 [379/710] Linking static target lib/librte_security.a 00:02:32.063 [380/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.063 [381/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.063 [382/710] Linking target lib/librte_stack.so.24.0 00:02:32.063 [383/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:32.063 [384/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.063 [385/710] Linking target lib/librte_rib.so.24.0 00:02:32.063 [386/710] Linking target lib/librte_mldev.so.24.0 00:02:32.063 [387/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:32.322 [388/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:32.322 [389/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.322 [390/710] Linking target lib/librte_security.so.24.0 00:02:32.322 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:32.322 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:32.322 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:32.582 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:32.582 [395/710] Linking static target lib/librte_sched.a 00:02:32.841 [396/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.841 [397/710] Linking target lib/librte_sched.so.24.0 00:02:32.841 [398/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:32.841 [399/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:33.100 [400/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:33.100 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:33.100 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:33.359 [403/710] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:33.359 [404/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:33.618 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:33.618 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:33.877 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:33.877 [408/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:34.137 [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:34.137 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:34.137 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:34.137 [412/710] Linking static target lib/librte_ipsec.a 00:02:34.137 [413/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:34.397 [414/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.397 [415/710] Linking target lib/librte_ipsec.so.24.0 00:02:34.397 [416/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:34.397 [417/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:34.397 [418/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:34.397 [419/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:34.656 [420/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:34.656 [421/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:34.656 [422/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:34.656 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:35.225 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:35.225 [425/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:35.225 [426/710] Linking static target lib/librte_pdcp.a 00:02:35.484 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:35.484 [428/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:35.484 [429/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:35.484 [430/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:35.484 [431/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:35.484 [432/710] Linking static target lib/librte_fib.a 00:02:35.748 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.748 [434/710] Linking target lib/librte_pdcp.so.24.0 00:02:35.748 [435/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.748 [436/710] Linking target lib/librte_fib.so.24.0 00:02:35.748 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:36.328 [438/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:36.328 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:36.328 [440/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:36.328 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:36.601 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:36.601 [443/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:36.601 [444/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:36.859 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:36.859 
[446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:36.859 [447/710] Linking static target lib/librte_port.a 00:02:37.118 [448/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:37.118 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:37.118 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:37.118 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:37.118 [452/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:37.377 [453/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:37.377 [454/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:37.377 [455/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.377 [456/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:37.377 [457/710] Linking static target lib/librte_pdump.a 00:02:37.377 [458/710] Linking target lib/librte_port.so.24.0 00:02:37.636 [459/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:37.636 [460/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:37.636 [461/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.636 [462/710] Linking target lib/librte_pdump.so.24.0 00:02:37.895 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:38.153 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:38.153 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:38.153 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:38.154 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:38.154 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:38.412 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:38.413 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:38.413 [471/710] Linking static target lib/librte_table.a 00:02:38.672 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:38.672 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:38.930 [474/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.189 [475/710] Linking target lib/librte_table.so.24.0 00:02:39.189 [476/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:39.189 [477/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:39.189 [478/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:39.448 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:39.448 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:39.707 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:39.707 [482/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:39.966 [483/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:39.966 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:39.966 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:39.966 [486/710] Compiling C object 
lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:40.533 [487/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:40.533 [488/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:40.533 [489/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:40.533 [490/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:40.533 [491/710] Linking static target lib/librte_graph.a 00:02:40.533 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:40.792 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:41.052 [494/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.052 [495/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:41.052 [496/710] Linking target lib/librte_graph.so.24.0 00:02:41.052 [497/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:41.316 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:41.316 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:41.574 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:41.574 [501/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:41.575 [502/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:41.575 [503/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:41.833 [504/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:41.833 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:41.833 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:42.092 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:42.092 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:42.351 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:42.351 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:42.351 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:42.351 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:42.351 [513/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:42.610 [514/710] Linking static target lib/librte_node.a 00:02:42.610 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:42.870 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.870 [517/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:42.870 [518/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:42.870 [519/710] Linking target lib/librte_node.so.24.0 00:02:42.870 [520/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:42.870 [521/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:42.870 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:42.870 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:42.870 [524/710] Linking static target drivers/librte_bus_vdev.a 00:02:43.129 [525/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:43.129 [526/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:43.129 [527/710] Linking static target 
drivers/librte_bus_pci.a 00:02:43.129 [528/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.129 [529/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:43.388 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:43.388 [531/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:43.388 [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:43.388 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:43.388 [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:43.388 [535/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:43.388 [536/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:43.388 [537/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:43.647 [538/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.647 [539/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:43.647 [540/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:43.647 [541/710] Linking static target drivers/librte_mempool_ring.a 00:02:43.647 [542/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:43.647 [543/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:43.647 [544/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:43.906 [545/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:43.906 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:44.165 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:44.424 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:44.424 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:44.424 [550/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:44.683 [551/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:45.250 [552/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:45.250 [553/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:45.250 [554/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:45.509 [555/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:45.509 [556/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:45.510 [557/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:45.769 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:46.027 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:46.027 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:46.286 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:46.286 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:46.544 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:46.803 [564/710] 
Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:46.803 [565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:46.803 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:47.371 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:47.371 [568/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:47.371 [569/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:47.371 [570/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:47.371 [571/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:47.371 [572/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:47.629 [573/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:47.889 [574/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:47.889 [575/710] Linking static target lib/librte_vhost.a 00:02:47.889 [576/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:47.889 [577/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:47.889 [578/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:48.148 [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:48.148 [580/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:48.148 [581/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:48.148 [582/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:48.407 [583/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:48.407 [584/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:48.407 [585/710] Linking static target drivers/librte_net_i40e.a 00:02:48.666 [586/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:48.666 [587/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:48.666 [588/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:48.666 [589/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:48.666 [590/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:48.666 [591/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:48.666 [592/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:48.924 [593/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.924 [594/710] Linking target lib/librte_vhost.so.24.0 00:02:49.182 [595/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.182 [596/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:49.182 [597/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:49.182 [598/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:49.441 [599/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:49.700 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:49.700 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:49.958 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:49.958 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:49.958 
[604/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:49.958 [605/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:49.958 [606/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:49.958 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:50.527 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:50.786 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:50.786 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:50.786 [611/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:50.786 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:50.786 [613/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:50.786 [614/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:50.786 [615/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:51.045 [616/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:51.045 [617/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:51.304 [618/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:51.304 [619/710] Linking static target lib/librte_pipeline.a 00:02:51.304 [620/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:51.304 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:51.563 [622/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:51.563 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:51.563 [624/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:51.822 [625/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:51.822 [626/710] Linking target app/dpdk-dumpcap 00:02:51.822 [627/710] Linking target app/dpdk-graph 00:02:51.822 [628/710] Linking target app/dpdk-pdump 00:02:52.080 [629/710] Linking target app/dpdk-proc-info 00:02:52.080 [630/710] Linking target app/dpdk-test-acl 00:02:52.340 [631/710] Linking target app/dpdk-test-cmdline 00:02:52.340 [632/710] Linking target app/dpdk-test-compress-perf 00:02:52.340 [633/710] Linking target app/dpdk-test-crypto-perf 00:02:52.340 [634/710] Linking target app/dpdk-test-dma-perf 00:02:52.340 [635/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:53.276 [636/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:53.276 [637/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:53.276 [638/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:53.276 [639/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:53.276 [640/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:53.535 [641/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:53.535 [642/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 
00:02:53.535 [643/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:53.793 [644/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:53.793 [645/710] Linking target app/dpdk-test-fib 00:02:53.793 [646/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:53.793 [647/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.793 [648/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:53.793 [649/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:53.793 [650/710] Linking target lib/librte_pipeline.so.24.0 00:02:54.052 [651/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:54.052 [652/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:54.311 [653/710] Linking target app/dpdk-test-gpudev 00:02:54.311 [654/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:54.311 [655/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:54.311 [656/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:54.311 [657/710] Linking target app/dpdk-test-eventdev 00:02:54.311 [658/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:54.570 [659/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:54.570 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:54.829 [661/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:54.829 [662/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:54.829 [663/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:54.829 [664/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:54.829 [665/710] Linking target app/dpdk-test-flow-perf 00:02:55.088 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:55.088 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:55.088 [668/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:55.347 [669/710] Linking target app/dpdk-test-bbdev 00:02:55.347 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:55.347 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:55.347 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:55.347 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:55.606 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:55.606 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:55.863 [676/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:55.863 [677/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:56.122 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:56.122 [679/710] Linking target app/dpdk-test-mldev 00:02:56.381 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:56.381 [681/710] Linking target app/dpdk-test-pipeline 00:02:56.381 [682/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:56.381 [683/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:56.950 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:56.950 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:56.950 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:56.950 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:57.209 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:57.209 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:57.209 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:57.467 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:57.468 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:57.727 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:57.986 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:57.986 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:58.244 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:58.244 [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:58.503 [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:58.503 [699/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:58.503 [700/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:58.504 [701/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:58.762 [702/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:58.762 [703/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:59.023 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:59.023 [705/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:59.023 [706/710] Linking target app/dpdk-test-sad 00:02:59.023 [707/710] Linking target app/dpdk-test-regex 00:02:59.322 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:59.322 [709/710] Linking target app/dpdk-testpmd 00:02:59.912 [710/710] Linking target app/dpdk-test-security-perf 00:02:59.912 14:49:23 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:59.912 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:59.912 [0/1] Installing files. 
00:03:00.174 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.174 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.175 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.175 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.176 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:00.176 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.176 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.177 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.178 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.178 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.179 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.179 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.179 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.179 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:00.179 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:00.179 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:00.179 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:00.179 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.179 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:00.438 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.438 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:00.439 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.439 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.702 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.702 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.702 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.702 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:00.702 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.702 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:00.702 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.702 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:00.702 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.702 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:00.702 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.702 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.702 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.702 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.702 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.702 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.702 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.702 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.702 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.702 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.702 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.702 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.702 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.702 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.702 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.702 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.702 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.702 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.702 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.702 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.702 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.702 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.702 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.702 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.702 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.702 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.702 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.702 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.702 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.702 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.702 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.702 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.702 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.702 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.702 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.702 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.702 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.703 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.704 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:00.705 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:00.705 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:00.705 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:00.705 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:00.705 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:00.705 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:00.705 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:00.705 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:00.705 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:00.705 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:00.705 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:00.705 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:00.705 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:00.705 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:00.706 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:00.706 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:00.706 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:00.706 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:00.706 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:00.706 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:00.706 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:00.706 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:00.706 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:00.706 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:00.706 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:00.706 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:00.706 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:00.706 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:00.706 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:00.706 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:00.706 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:00.706 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:00.706 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:00.706 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:00.706 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:00.706 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:00.706 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:00.706 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:00.706 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:00.706 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:00.706 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:00.706 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:00.706 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:00.706 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:00.706 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:00.706 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:00.706 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:00.706 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:00.706 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:00.706 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:00.706 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:00.706 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:00.706 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:00.706 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:00.706 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:00.706 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:00.706 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:00.706 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:00.706 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:00.706 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:00.706 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:00.706 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:00.706 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:00.706 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:00.706 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:00.706 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:00.706 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:00.706 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:00.706 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:00.706 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:00.706 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:00.706 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:00.706 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:00.706 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:00.706 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:00.706 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:00.706 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:00.706 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:00.706 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:00.706 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:00.706 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:00.706 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:00.706 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:00.706 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:00.706 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:00.706 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:00.706 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:00.706 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:00.706 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:00.706 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:00.706 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:00.706 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:00.706 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:00.706 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:00.706 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:00.706 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:00.706 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:00.706 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:00.706 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:00.706 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:00.706 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:00.706 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:00.706 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:00.706 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:00.706 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:00.706 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:00.706 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:00.706 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:00.706 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:00.706 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:00.706 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:00.706 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:00.706 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:00.706 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:00.706 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:00.706 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:00.706 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:00.706 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:00.706 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:00.706 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:00.706 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:00.707 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:00.707 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:00.707 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:00.707 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:00.707 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:00.707 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:00.707 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:00.707 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:00.707 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:00.707 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:00.707 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:00.707 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:00.707 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:00.966 14:49:24 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:00.966 14:49:24 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:00.966 14:49:24 -- common/autobuild_common.sh@203 -- $ cat 00:03:00.966 14:49:24 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:00.966 00:03:00.966 real 0m55.483s 00:03:00.966 user 6m34.860s 00:03:00.966 sys 1m6.797s 00:03:00.966 14:49:24 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:00.966 14:49:24 -- common/autotest_common.sh@10 -- $ set +x 00:03:00.966 ************************************ 00:03:00.966 END TEST build_native_dpdk 00:03:00.966 ************************************ 00:03:00.966 14:49:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:00.966 14:49:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:00.966 14:49:24 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:00.966 14:49:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:00.966 14:49:24 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:00.966 14:49:24 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:00.966 14:49:24 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:00.966 14:49:24 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:03:00.966 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:01.230 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.230 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:01.230 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:01.797 Using 'verbs' RDMA provider 00:03:16.938 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:29.147 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:29.406 go version go1.21.1 linux/amd64 00:03:29.665 Creating mk/config.mk...done. 00:03:29.924 Creating mk/cc.flags.mk...done. 00:03:29.924 Type 'make' to build. 00:03:29.924 14:49:53 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:29.924 14:49:53 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:29.924 14:49:53 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:29.924 14:49:53 -- common/autotest_common.sh@10 -- $ set +x 00:03:29.924 ************************************ 00:03:29.924 START TEST make 00:03:29.924 ************************************ 00:03:29.924 14:49:53 -- common/autotest_common.sh@1114 -- $ make -j10 00:03:30.183 make[1]: Nothing to be done for 'all'. 
00:03:52.124 CC lib/log/log.o 00:03:52.124 CC lib/ut/ut.o 00:03:52.124 CC lib/log/log_flags.o 00:03:52.124 CC lib/log/log_deprecated.o 00:03:52.124 CC lib/ut_mock/mock.o 00:03:52.124 LIB libspdk_ut.a 00:03:52.124 LIB libspdk_ut_mock.a 00:03:52.124 LIB libspdk_log.a 00:03:52.124 SO libspdk_ut.so.1.0 00:03:52.124 SO libspdk_ut_mock.so.5.0 00:03:52.124 SO libspdk_log.so.6.1 00:03:52.124 SYMLINK libspdk_ut.so 00:03:52.124 SYMLINK libspdk_ut_mock.so 00:03:52.124 SYMLINK libspdk_log.so 00:03:52.124 CC lib/util/base64.o 00:03:52.124 CC lib/util/bit_array.o 00:03:52.124 CC lib/util/cpuset.o 00:03:52.124 CC lib/dma/dma.o 00:03:52.124 CC lib/util/crc32.o 00:03:52.124 CC lib/util/crc32c.o 00:03:52.124 CC lib/util/crc16.o 00:03:52.124 CC lib/ioat/ioat.o 00:03:52.124 CXX lib/trace_parser/trace.o 00:03:52.124 CC lib/vfio_user/host/vfio_user_pci.o 00:03:52.124 CC lib/vfio_user/host/vfio_user.o 00:03:52.124 CC lib/util/crc32_ieee.o 00:03:52.124 CC lib/util/crc64.o 00:03:52.124 CC lib/util/dif.o 00:03:52.124 LIB libspdk_dma.a 00:03:52.124 CC lib/util/fd.o 00:03:52.124 SO libspdk_dma.so.3.0 00:03:52.124 CC lib/util/file.o 00:03:52.124 CC lib/util/hexlify.o 00:03:52.124 SYMLINK libspdk_dma.so 00:03:52.124 CC lib/util/iov.o 00:03:52.124 CC lib/util/math.o 00:03:52.124 CC lib/util/pipe.o 00:03:52.124 LIB libspdk_ioat.a 00:03:52.124 LIB libspdk_vfio_user.a 00:03:52.124 SO libspdk_ioat.so.6.0 00:03:52.124 CC lib/util/strerror_tls.o 00:03:52.124 SO libspdk_vfio_user.so.4.0 00:03:52.124 CC lib/util/string.o 00:03:52.124 SYMLINK libspdk_ioat.so 00:03:52.124 CC lib/util/uuid.o 00:03:52.124 SYMLINK libspdk_vfio_user.so 00:03:52.124 CC lib/util/fd_group.o 00:03:52.124 CC lib/util/xor.o 00:03:52.124 CC lib/util/zipf.o 00:03:52.383 LIB libspdk_util.a 00:03:52.383 SO libspdk_util.so.8.0 00:03:52.641 SYMLINK libspdk_util.so 00:03:52.641 LIB libspdk_trace_parser.a 00:03:52.641 SO libspdk_trace_parser.so.4.0 00:03:52.641 CC lib/idxd/idxd.o 00:03:52.641 CC lib/idxd/idxd_user.o 00:03:52.641 CC lib/idxd/idxd_kernel.o 00:03:52.641 CC lib/rdma/common.o 00:03:52.641 CC lib/env_dpdk/env.o 00:03:52.641 CC lib/conf/conf.o 00:03:52.641 CC lib/rdma/rdma_verbs.o 00:03:52.641 CC lib/json/json_parse.o 00:03:52.641 CC lib/vmd/vmd.o 00:03:52.641 SYMLINK libspdk_trace_parser.so 00:03:52.641 CC lib/json/json_util.o 00:03:52.900 CC lib/vmd/led.o 00:03:52.900 CC lib/env_dpdk/memory.o 00:03:52.900 LIB libspdk_conf.a 00:03:52.900 CC lib/env_dpdk/pci.o 00:03:52.900 CC lib/env_dpdk/init.o 00:03:52.900 SO libspdk_conf.so.5.0 00:03:52.900 LIB libspdk_rdma.a 00:03:52.900 CC lib/json/json_write.o 00:03:52.900 CC lib/env_dpdk/threads.o 00:03:52.900 SYMLINK libspdk_conf.so 00:03:52.900 SO libspdk_rdma.so.5.0 00:03:52.900 CC lib/env_dpdk/pci_ioat.o 00:03:53.160 SYMLINK libspdk_rdma.so 00:03:53.160 CC lib/env_dpdk/pci_virtio.o 00:03:53.160 CC lib/env_dpdk/pci_vmd.o 00:03:53.160 CC lib/env_dpdk/pci_idxd.o 00:03:53.160 LIB libspdk_idxd.a 00:03:53.160 CC lib/env_dpdk/pci_event.o 00:03:53.160 SO libspdk_idxd.so.11.0 00:03:53.160 CC lib/env_dpdk/sigbus_handler.o 00:03:53.160 CC lib/env_dpdk/pci_dpdk.o 00:03:53.160 LIB libspdk_json.a 00:03:53.160 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:53.160 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:53.160 SYMLINK libspdk_idxd.so 00:03:53.160 SO libspdk_json.so.5.1 00:03:53.160 LIB libspdk_vmd.a 00:03:53.419 SO libspdk_vmd.so.5.0 00:03:53.419 SYMLINK libspdk_json.so 00:03:53.419 SYMLINK libspdk_vmd.so 00:03:53.419 CC lib/jsonrpc/jsonrpc_server.o 00:03:53.419 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:53.419 CC 
lib/jsonrpc/jsonrpc_client.o 00:03:53.419 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:53.678 LIB libspdk_jsonrpc.a 00:03:53.678 SO libspdk_jsonrpc.so.5.1 00:03:53.936 SYMLINK libspdk_jsonrpc.so 00:03:53.936 LIB libspdk_env_dpdk.a 00:03:53.936 CC lib/rpc/rpc.o 00:03:53.936 SO libspdk_env_dpdk.so.13.0 00:03:54.195 SYMLINK libspdk_env_dpdk.so 00:03:54.195 LIB libspdk_rpc.a 00:03:54.195 SO libspdk_rpc.so.5.0 00:03:54.195 SYMLINK libspdk_rpc.so 00:03:54.454 CC lib/trace/trace.o 00:03:54.454 CC lib/trace/trace_flags.o 00:03:54.454 CC lib/trace/trace_rpc.o 00:03:54.454 CC lib/sock/sock.o 00:03:54.454 CC lib/sock/sock_rpc.o 00:03:54.454 CC lib/notify/notify.o 00:03:54.454 CC lib/notify/notify_rpc.o 00:03:54.454 LIB libspdk_notify.a 00:03:54.454 SO libspdk_notify.so.5.0 00:03:54.454 LIB libspdk_trace.a 00:03:54.712 SO libspdk_trace.so.9.0 00:03:54.712 SYMLINK libspdk_notify.so 00:03:54.712 SYMLINK libspdk_trace.so 00:03:54.712 LIB libspdk_sock.a 00:03:54.712 SO libspdk_sock.so.8.0 00:03:54.971 CC lib/thread/thread.o 00:03:54.971 CC lib/thread/iobuf.o 00:03:54.971 SYMLINK libspdk_sock.so 00:03:54.971 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:54.971 CC lib/nvme/nvme_ctrlr.o 00:03:54.971 CC lib/nvme/nvme_fabric.o 00:03:54.971 CC lib/nvme/nvme_ns_cmd.o 00:03:54.971 CC lib/nvme/nvme_qpair.o 00:03:54.971 CC lib/nvme/nvme_ns.o 00:03:54.971 CC lib/nvme/nvme_pcie_common.o 00:03:54.971 CC lib/nvme/nvme_pcie.o 00:03:55.230 CC lib/nvme/nvme.o 00:03:55.798 CC lib/nvme/nvme_quirks.o 00:03:55.798 CC lib/nvme/nvme_transport.o 00:03:55.798 CC lib/nvme/nvme_discovery.o 00:03:55.798 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:55.798 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:55.798 CC lib/nvme/nvme_tcp.o 00:03:56.056 CC lib/nvme/nvme_opal.o 00:03:56.056 CC lib/nvme/nvme_io_msg.o 00:03:56.315 CC lib/nvme/nvme_poll_group.o 00:03:56.315 LIB libspdk_thread.a 00:03:56.315 SO libspdk_thread.so.9.0 00:03:56.315 CC lib/nvme/nvme_zns.o 00:03:56.315 SYMLINK libspdk_thread.so 00:03:56.315 CC lib/nvme/nvme_cuse.o 00:03:56.315 CC lib/nvme/nvme_vfio_user.o 00:03:56.315 CC lib/nvme/nvme_rdma.o 00:03:56.574 CC lib/accel/accel.o 00:03:56.574 CC lib/blob/blobstore.o 00:03:56.574 CC lib/blob/request.o 00:03:56.833 CC lib/accel/accel_rpc.o 00:03:56.833 CC lib/accel/accel_sw.o 00:03:57.092 CC lib/blob/zeroes.o 00:03:57.092 CC lib/init/json_config.o 00:03:57.092 CC lib/virtio/virtio.o 00:03:57.092 CC lib/init/subsystem.o 00:03:57.092 CC lib/virtio/virtio_vhost_user.o 00:03:57.092 CC lib/virtio/virtio_vfio_user.o 00:03:57.351 CC lib/virtio/virtio_pci.o 00:03:57.351 CC lib/blob/blob_bs_dev.o 00:03:57.351 CC lib/init/subsystem_rpc.o 00:03:57.351 CC lib/init/rpc.o 00:03:57.351 LIB libspdk_init.a 00:03:57.351 SO libspdk_init.so.4.0 00:03:57.610 LIB libspdk_virtio.a 00:03:57.610 SYMLINK libspdk_init.so 00:03:57.610 LIB libspdk_accel.a 00:03:57.610 SO libspdk_virtio.so.6.0 00:03:57.610 SO libspdk_accel.so.14.0 00:03:57.610 SYMLINK libspdk_virtio.so 00:03:57.610 SYMLINK libspdk_accel.so 00:03:57.610 CC lib/event/app.o 00:03:57.610 CC lib/event/reactor.o 00:03:57.610 CC lib/event/log_rpc.o 00:03:57.610 CC lib/event/app_rpc.o 00:03:57.610 CC lib/event/scheduler_static.o 00:03:57.610 LIB libspdk_nvme.a 00:03:57.869 CC lib/bdev/bdev.o 00:03:57.869 CC lib/bdev/bdev_rpc.o 00:03:57.869 CC lib/bdev/bdev_zone.o 00:03:57.869 CC lib/bdev/part.o 00:03:57.869 CC lib/bdev/scsi_nvme.o 00:03:57.869 SO libspdk_nvme.so.12.0 00:03:58.128 LIB libspdk_event.a 00:03:58.128 SO libspdk_event.so.12.0 00:03:58.128 SYMLINK libspdk_nvme.so 00:03:58.128 SYMLINK libspdk_event.so 00:03:59.062 
LIB libspdk_blob.a 00:03:59.062 SO libspdk_blob.so.10.1 00:03:59.322 SYMLINK libspdk_blob.so 00:03:59.322 CC lib/blobfs/tree.o 00:03:59.322 CC lib/blobfs/blobfs.o 00:03:59.322 CC lib/lvol/lvol.o 00:03:59.890 LIB libspdk_bdev.a 00:03:59.890 SO libspdk_bdev.so.14.0 00:04:00.150 SYMLINK libspdk_bdev.so 00:04:00.150 LIB libspdk_blobfs.a 00:04:00.150 SO libspdk_blobfs.so.9.0 00:04:00.150 CC lib/nbd/nbd.o 00:04:00.150 CC lib/ftl/ftl_core.o 00:04:00.150 CC lib/nbd/nbd_rpc.o 00:04:00.150 CC lib/nvmf/ctrlr.o 00:04:00.150 CC lib/nvmf/ctrlr_discovery.o 00:04:00.150 CC lib/nvmf/ctrlr_bdev.o 00:04:00.150 CC lib/scsi/dev.o 00:04:00.150 CC lib/ublk/ublk.o 00:04:00.150 LIB libspdk_lvol.a 00:04:00.150 SYMLINK libspdk_blobfs.so 00:04:00.150 CC lib/ftl/ftl_init.o 00:04:00.150 SO libspdk_lvol.so.9.1 00:04:00.409 SYMLINK libspdk_lvol.so 00:04:00.409 CC lib/ftl/ftl_layout.o 00:04:00.409 CC lib/ftl/ftl_debug.o 00:04:00.409 CC lib/ublk/ublk_rpc.o 00:04:00.409 CC lib/scsi/lun.o 00:04:00.720 CC lib/scsi/port.o 00:04:00.720 LIB libspdk_nbd.a 00:04:00.720 CC lib/scsi/scsi.o 00:04:00.720 CC lib/ftl/ftl_io.o 00:04:00.720 SO libspdk_nbd.so.6.0 00:04:00.720 CC lib/nvmf/subsystem.o 00:04:00.720 CC lib/nvmf/nvmf.o 00:04:00.720 SYMLINK libspdk_nbd.so 00:04:00.720 CC lib/scsi/scsi_bdev.o 00:04:00.720 CC lib/ftl/ftl_sb.o 00:04:00.720 CC lib/ftl/ftl_l2p.o 00:04:00.720 CC lib/ftl/ftl_l2p_flat.o 00:04:01.007 LIB libspdk_ublk.a 00:04:01.007 CC lib/nvmf/nvmf_rpc.o 00:04:01.007 SO libspdk_ublk.so.2.0 00:04:01.007 CC lib/nvmf/transport.o 00:04:01.007 CC lib/nvmf/tcp.o 00:04:01.007 SYMLINK libspdk_ublk.so 00:04:01.007 CC lib/nvmf/rdma.o 00:04:01.007 CC lib/scsi/scsi_pr.o 00:04:01.007 CC lib/ftl/ftl_nv_cache.o 00:04:01.007 CC lib/scsi/scsi_rpc.o 00:04:01.266 CC lib/ftl/ftl_band.o 00:04:01.266 CC lib/scsi/task.o 00:04:01.526 LIB libspdk_scsi.a 00:04:01.526 CC lib/ftl/ftl_band_ops.o 00:04:01.526 CC lib/ftl/ftl_writer.o 00:04:01.526 SO libspdk_scsi.so.8.0 00:04:01.526 CC lib/ftl/ftl_rq.o 00:04:01.526 SYMLINK libspdk_scsi.so 00:04:01.526 CC lib/ftl/ftl_reloc.o 00:04:01.526 CC lib/ftl/ftl_l2p_cache.o 00:04:01.526 CC lib/ftl/ftl_p2l.o 00:04:01.785 CC lib/ftl/mngt/ftl_mngt.o 00:04:01.785 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:01.785 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:01.785 CC lib/iscsi/conn.o 00:04:01.785 CC lib/iscsi/init_grp.o 00:04:01.785 CC lib/vhost/vhost.o 00:04:02.044 CC lib/vhost/vhost_rpc.o 00:04:02.044 CC lib/vhost/vhost_scsi.o 00:04:02.044 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:02.044 CC lib/iscsi/iscsi.o 00:04:02.044 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:02.044 CC lib/vhost/vhost_blk.o 00:04:02.044 CC lib/iscsi/md5.o 00:04:02.303 CC lib/iscsi/param.o 00:04:02.303 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:02.303 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:02.303 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:02.562 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:02.562 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:02.562 CC lib/vhost/rte_vhost_user.o 00:04:02.562 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:02.562 CC lib/iscsi/portal_grp.o 00:04:02.562 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:02.822 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:02.822 CC lib/ftl/utils/ftl_conf.o 00:04:02.822 LIB libspdk_nvmf.a 00:04:02.822 CC lib/ftl/utils/ftl_md.o 00:04:02.822 CC lib/iscsi/tgt_node.o 00:04:02.822 CC lib/iscsi/iscsi_subsystem.o 00:04:02.822 SO libspdk_nvmf.so.17.0 00:04:02.822 CC lib/iscsi/iscsi_rpc.o 00:04:02.822 CC lib/iscsi/task.o 00:04:03.081 CC lib/ftl/utils/ftl_mempool.o 00:04:03.081 CC lib/ftl/utils/ftl_bitmap.o 00:04:03.081 SYMLINK libspdk_nvmf.so 00:04:03.081 
CC lib/ftl/utils/ftl_property.o 00:04:03.081 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:03.081 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:03.340 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:03.340 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:03.340 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:03.340 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:03.340 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:03.340 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:03.340 LIB libspdk_iscsi.a 00:04:03.340 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:03.340 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:03.340 CC lib/ftl/base/ftl_base_dev.o 00:04:03.340 SO libspdk_iscsi.so.7.0 00:04:03.340 CC lib/ftl/base/ftl_base_bdev.o 00:04:03.340 CC lib/ftl/ftl_trace.o 00:04:03.599 SYMLINK libspdk_iscsi.so 00:04:03.599 LIB libspdk_vhost.a 00:04:03.599 SO libspdk_vhost.so.7.1 00:04:03.599 LIB libspdk_ftl.a 00:04:03.599 SYMLINK libspdk_vhost.so 00:04:03.858 SO libspdk_ftl.so.8.0 00:04:04.117 SYMLINK libspdk_ftl.so 00:04:04.377 CC module/env_dpdk/env_dpdk_rpc.o 00:04:04.377 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:04.377 CC module/scheduler/gscheduler/gscheduler.o 00:04:04.377 CC module/accel/ioat/accel_ioat.o 00:04:04.377 CC module/accel/dsa/accel_dsa.o 00:04:04.377 CC module/blob/bdev/blob_bdev.o 00:04:04.377 CC module/sock/posix/posix.o 00:04:04.377 CC module/accel/error/accel_error.o 00:04:04.377 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:04.377 CC module/accel/iaa/accel_iaa.o 00:04:04.377 LIB libspdk_env_dpdk_rpc.a 00:04:04.377 SO libspdk_env_dpdk_rpc.so.5.0 00:04:04.377 LIB libspdk_scheduler_gscheduler.a 00:04:04.377 LIB libspdk_scheduler_dpdk_governor.a 00:04:04.377 SYMLINK libspdk_env_dpdk_rpc.so 00:04:04.377 CC module/accel/iaa/accel_iaa_rpc.o 00:04:04.377 SO libspdk_scheduler_gscheduler.so.3.0 00:04:04.377 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:04.377 CC module/accel/error/accel_error_rpc.o 00:04:04.377 CC module/accel/ioat/accel_ioat_rpc.o 00:04:04.377 LIB libspdk_scheduler_dynamic.a 00:04:04.636 SO libspdk_scheduler_dynamic.so.3.0 00:04:04.636 SYMLINK libspdk_scheduler_gscheduler.so 00:04:04.636 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:04.636 CC module/accel/dsa/accel_dsa_rpc.o 00:04:04.636 LIB libspdk_blob_bdev.a 00:04:04.636 SYMLINK libspdk_scheduler_dynamic.so 00:04:04.636 SO libspdk_blob_bdev.so.10.1 00:04:04.636 LIB libspdk_accel_iaa.a 00:04:04.636 LIB libspdk_accel_error.a 00:04:04.636 SO libspdk_accel_iaa.so.2.0 00:04:04.636 LIB libspdk_accel_ioat.a 00:04:04.636 SYMLINK libspdk_blob_bdev.so 00:04:04.636 SO libspdk_accel_error.so.1.0 00:04:04.636 LIB libspdk_accel_dsa.a 00:04:04.636 SO libspdk_accel_ioat.so.5.0 00:04:04.636 SYMLINK libspdk_accel_iaa.so 00:04:04.636 SO libspdk_accel_dsa.so.4.0 00:04:04.636 SYMLINK libspdk_accel_error.so 00:04:04.636 SYMLINK libspdk_accel_ioat.so 00:04:04.636 SYMLINK libspdk_accel_dsa.so 00:04:04.895 CC module/bdev/delay/vbdev_delay.o 00:04:04.895 CC module/bdev/malloc/bdev_malloc.o 00:04:04.895 CC module/bdev/error/vbdev_error.o 00:04:04.895 CC module/bdev/lvol/vbdev_lvol.o 00:04:04.895 CC module/bdev/null/bdev_null.o 00:04:04.895 CC module/bdev/gpt/gpt.o 00:04:04.895 CC module/bdev/nvme/bdev_nvme.o 00:04:04.895 CC module/blobfs/bdev/blobfs_bdev.o 00:04:04.895 CC module/bdev/passthru/vbdev_passthru.o 00:04:04.895 LIB libspdk_sock_posix.a 00:04:04.895 CC module/bdev/gpt/vbdev_gpt.o 00:04:04.895 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:04.895 SO libspdk_sock_posix.so.5.0 00:04:05.155 CC module/bdev/error/vbdev_error_rpc.o 00:04:05.155 SYMLINK 
libspdk_sock_posix.so 00:04:05.155 CC module/bdev/null/bdev_null_rpc.o 00:04:05.155 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:05.155 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:05.155 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:05.155 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:05.155 LIB libspdk_blobfs_bdev.a 00:04:05.155 SO libspdk_blobfs_bdev.so.5.0 00:04:05.155 LIB libspdk_bdev_error.a 00:04:05.155 LIB libspdk_bdev_gpt.a 00:04:05.155 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:05.155 LIB libspdk_bdev_null.a 00:04:05.155 SO libspdk_bdev_error.so.5.0 00:04:05.155 LIB libspdk_bdev_passthru.a 00:04:05.414 SO libspdk_bdev_gpt.so.5.0 00:04:05.414 SO libspdk_bdev_null.so.5.0 00:04:05.414 SO libspdk_bdev_passthru.so.5.0 00:04:05.414 LIB libspdk_bdev_delay.a 00:04:05.414 LIB libspdk_bdev_malloc.a 00:04:05.414 SYMLINK libspdk_blobfs_bdev.so 00:04:05.414 SYMLINK libspdk_bdev_gpt.so 00:04:05.414 SO libspdk_bdev_malloc.so.5.0 00:04:05.414 SYMLINK libspdk_bdev_error.so 00:04:05.414 SO libspdk_bdev_delay.so.5.0 00:04:05.414 SYMLINK libspdk_bdev_null.so 00:04:05.414 CC module/bdev/nvme/nvme_rpc.o 00:04:05.414 CC module/bdev/nvme/bdev_mdns_client.o 00:04:05.414 SYMLINK libspdk_bdev_passthru.so 00:04:05.414 SYMLINK libspdk_bdev_delay.so 00:04:05.414 CC module/bdev/nvme/vbdev_opal.o 00:04:05.414 SYMLINK libspdk_bdev_malloc.so 00:04:05.414 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:05.414 CC module/bdev/raid/bdev_raid.o 00:04:05.414 CC module/bdev/split/vbdev_split.o 00:04:05.414 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:05.414 LIB libspdk_bdev_lvol.a 00:04:05.673 SO libspdk_bdev_lvol.so.5.0 00:04:05.673 CC module/bdev/raid/bdev_raid_rpc.o 00:04:05.673 CC module/bdev/raid/bdev_raid_sb.o 00:04:05.673 SYMLINK libspdk_bdev_lvol.so 00:04:05.673 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:05.673 CC module/bdev/split/vbdev_split_rpc.o 00:04:05.673 CC module/bdev/raid/raid0.o 00:04:05.673 CC module/bdev/aio/bdev_aio.o 00:04:05.673 CC module/bdev/ftl/bdev_ftl.o 00:04:05.673 LIB libspdk_bdev_split.a 00:04:05.673 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:05.673 CC module/bdev/raid/raid1.o 00:04:05.673 CC module/bdev/raid/concat.o 00:04:05.933 SO libspdk_bdev_split.so.5.0 00:04:05.933 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:05.933 SYMLINK libspdk_bdev_split.so 00:04:05.933 CC module/bdev/aio/bdev_aio_rpc.o 00:04:05.933 LIB libspdk_bdev_zone_block.a 00:04:05.933 CC module/bdev/iscsi/bdev_iscsi.o 00:04:05.933 SO libspdk_bdev_zone_block.so.5.0 00:04:05.933 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:05.933 SYMLINK libspdk_bdev_zone_block.so 00:04:05.933 LIB libspdk_bdev_aio.a 00:04:05.933 LIB libspdk_bdev_ftl.a 00:04:06.192 SO libspdk_bdev_aio.so.5.0 00:04:06.192 SO libspdk_bdev_ftl.so.5.0 00:04:06.192 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:06.192 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:06.192 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:06.192 SYMLINK libspdk_bdev_aio.so 00:04:06.192 SYMLINK libspdk_bdev_ftl.so 00:04:06.192 LIB libspdk_bdev_raid.a 00:04:06.192 SO libspdk_bdev_raid.so.5.0 00:04:06.192 LIB libspdk_bdev_iscsi.a 00:04:06.451 SO libspdk_bdev_iscsi.so.5.0 00:04:06.451 SYMLINK libspdk_bdev_raid.so 00:04:06.451 SYMLINK libspdk_bdev_iscsi.so 00:04:06.451 LIB libspdk_bdev_virtio.a 00:04:06.451 SO libspdk_bdev_virtio.so.5.0 00:04:06.710 SYMLINK libspdk_bdev_virtio.so 00:04:06.710 LIB libspdk_bdev_nvme.a 00:04:06.968 SO libspdk_bdev_nvme.so.6.0 00:04:06.968 SYMLINK libspdk_bdev_nvme.so 00:04:07.227 CC module/event/subsystems/vmd/vmd.o 00:04:07.227 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:04:07.227 CC module/event/subsystems/iobuf/iobuf.o 00:04:07.227 CC module/event/subsystems/scheduler/scheduler.o 00:04:07.227 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:07.227 CC module/event/subsystems/sock/sock.o 00:04:07.227 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:07.486 LIB libspdk_event_sock.a 00:04:07.487 LIB libspdk_event_scheduler.a 00:04:07.487 LIB libspdk_event_vmd.a 00:04:07.487 LIB libspdk_event_vhost_blk.a 00:04:07.487 SO libspdk_event_vhost_blk.so.2.0 00:04:07.487 SO libspdk_event_scheduler.so.3.0 00:04:07.487 SO libspdk_event_sock.so.4.0 00:04:07.487 SO libspdk_event_vmd.so.5.0 00:04:07.487 LIB libspdk_event_iobuf.a 00:04:07.487 SYMLINK libspdk_event_sock.so 00:04:07.487 SYMLINK libspdk_event_scheduler.so 00:04:07.487 SYMLINK libspdk_event_vmd.so 00:04:07.487 SYMLINK libspdk_event_vhost_blk.so 00:04:07.487 SO libspdk_event_iobuf.so.2.0 00:04:07.487 SYMLINK libspdk_event_iobuf.so 00:04:07.745 CC module/event/subsystems/accel/accel.o 00:04:07.745 LIB libspdk_event_accel.a 00:04:08.005 SO libspdk_event_accel.so.5.0 00:04:08.005 SYMLINK libspdk_event_accel.so 00:04:08.367 CC module/event/subsystems/bdev/bdev.o 00:04:08.367 LIB libspdk_event_bdev.a 00:04:08.367 SO libspdk_event_bdev.so.5.0 00:04:08.367 SYMLINK libspdk_event_bdev.so 00:04:08.636 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:08.636 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:08.636 CC module/event/subsystems/nbd/nbd.o 00:04:08.636 CC module/event/subsystems/scsi/scsi.o 00:04:08.636 CC module/event/subsystems/ublk/ublk.o 00:04:08.895 LIB libspdk_event_nbd.a 00:04:08.895 LIB libspdk_event_ublk.a 00:04:08.895 LIB libspdk_event_scsi.a 00:04:08.895 SO libspdk_event_nbd.so.5.0 00:04:08.895 SO libspdk_event_ublk.so.2.0 00:04:08.895 SO libspdk_event_scsi.so.5.0 00:04:08.895 SYMLINK libspdk_event_nbd.so 00:04:08.895 SYMLINK libspdk_event_ublk.so 00:04:08.895 LIB libspdk_event_nvmf.a 00:04:08.895 SYMLINK libspdk_event_scsi.so 00:04:08.895 SO libspdk_event_nvmf.so.5.0 00:04:08.895 SYMLINK libspdk_event_nvmf.so 00:04:09.154 CC module/event/subsystems/iscsi/iscsi.o 00:04:09.154 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:09.154 LIB libspdk_event_vhost_scsi.a 00:04:09.154 SO libspdk_event_vhost_scsi.so.2.0 00:04:09.154 LIB libspdk_event_iscsi.a 00:04:09.154 SYMLINK libspdk_event_vhost_scsi.so 00:04:09.413 SO libspdk_event_iscsi.so.5.0 00:04:09.413 SYMLINK libspdk_event_iscsi.so 00:04:09.413 SO libspdk.so.5.0 00:04:09.413 SYMLINK libspdk.so 00:04:09.671 CC app/trace_record/trace_record.o 00:04:09.671 CC app/spdk_nvme_identify/identify.o 00:04:09.671 CXX app/trace/trace.o 00:04:09.671 CC app/spdk_lspci/spdk_lspci.o 00:04:09.671 CC app/spdk_nvme_perf/perf.o 00:04:09.671 CC app/nvmf_tgt/nvmf_main.o 00:04:09.671 CC app/iscsi_tgt/iscsi_tgt.o 00:04:09.671 CC app/spdk_tgt/spdk_tgt.o 00:04:09.671 CC examples/accel/perf/accel_perf.o 00:04:09.671 CC test/accel/dif/dif.o 00:04:09.930 LINK spdk_lspci 00:04:09.930 LINK nvmf_tgt 00:04:09.930 LINK spdk_trace_record 00:04:09.930 LINK spdk_tgt 00:04:09.930 LINK iscsi_tgt 00:04:09.930 CC app/spdk_nvme_discover/discovery_aer.o 00:04:10.190 LINK spdk_trace 00:04:10.190 CC app/spdk_top/spdk_top.o 00:04:10.190 LINK accel_perf 00:04:10.190 LINK dif 00:04:10.190 LINK spdk_nvme_discover 00:04:10.190 CC test/app/bdev_svc/bdev_svc.o 00:04:10.190 CC test/bdev/bdevio/bdevio.o 00:04:10.190 CC test/blobfs/mkfs/mkfs.o 00:04:10.448 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:10.448 TEST_HEADER include/spdk/accel.h 00:04:10.448 
TEST_HEADER include/spdk/accel_module.h 00:04:10.448 TEST_HEADER include/spdk/assert.h 00:04:10.448 TEST_HEADER include/spdk/barrier.h 00:04:10.448 TEST_HEADER include/spdk/base64.h 00:04:10.448 TEST_HEADER include/spdk/bdev.h 00:04:10.448 TEST_HEADER include/spdk/bdev_module.h 00:04:10.448 TEST_HEADER include/spdk/bdev_zone.h 00:04:10.448 TEST_HEADER include/spdk/bit_array.h 00:04:10.448 TEST_HEADER include/spdk/bit_pool.h 00:04:10.448 LINK spdk_nvme_identify 00:04:10.448 TEST_HEADER include/spdk/blob_bdev.h 00:04:10.448 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:10.448 TEST_HEADER include/spdk/blobfs.h 00:04:10.448 TEST_HEADER include/spdk/blob.h 00:04:10.448 TEST_HEADER include/spdk/conf.h 00:04:10.448 TEST_HEADER include/spdk/config.h 00:04:10.448 TEST_HEADER include/spdk/cpuset.h 00:04:10.448 LINK spdk_nvme_perf 00:04:10.448 TEST_HEADER include/spdk/crc16.h 00:04:10.448 TEST_HEADER include/spdk/crc32.h 00:04:10.448 TEST_HEADER include/spdk/crc64.h 00:04:10.448 TEST_HEADER include/spdk/dif.h 00:04:10.448 LINK bdev_svc 00:04:10.448 TEST_HEADER include/spdk/dma.h 00:04:10.448 TEST_HEADER include/spdk/endian.h 00:04:10.448 TEST_HEADER include/spdk/env_dpdk.h 00:04:10.448 TEST_HEADER include/spdk/env.h 00:04:10.448 TEST_HEADER include/spdk/event.h 00:04:10.448 TEST_HEADER include/spdk/fd_group.h 00:04:10.448 TEST_HEADER include/spdk/fd.h 00:04:10.448 TEST_HEADER include/spdk/file.h 00:04:10.448 TEST_HEADER include/spdk/ftl.h 00:04:10.448 CC examples/bdev/hello_world/hello_bdev.o 00:04:10.448 TEST_HEADER include/spdk/gpt_spec.h 00:04:10.448 TEST_HEADER include/spdk/hexlify.h 00:04:10.448 TEST_HEADER include/spdk/histogram_data.h 00:04:10.448 TEST_HEADER include/spdk/idxd.h 00:04:10.448 TEST_HEADER include/spdk/idxd_spec.h 00:04:10.448 TEST_HEADER include/spdk/init.h 00:04:10.448 TEST_HEADER include/spdk/ioat.h 00:04:10.448 TEST_HEADER include/spdk/ioat_spec.h 00:04:10.448 TEST_HEADER include/spdk/iscsi_spec.h 00:04:10.448 TEST_HEADER include/spdk/json.h 00:04:10.448 LINK mkfs 00:04:10.448 TEST_HEADER include/spdk/jsonrpc.h 00:04:10.448 TEST_HEADER include/spdk/likely.h 00:04:10.448 TEST_HEADER include/spdk/log.h 00:04:10.448 TEST_HEADER include/spdk/lvol.h 00:04:10.448 TEST_HEADER include/spdk/memory.h 00:04:10.448 TEST_HEADER include/spdk/mmio.h 00:04:10.448 TEST_HEADER include/spdk/nbd.h 00:04:10.448 CC test/dma/test_dma/test_dma.o 00:04:10.448 TEST_HEADER include/spdk/notify.h 00:04:10.448 TEST_HEADER include/spdk/nvme.h 00:04:10.448 TEST_HEADER include/spdk/nvme_intel.h 00:04:10.448 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:10.448 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:10.448 TEST_HEADER include/spdk/nvme_spec.h 00:04:10.448 TEST_HEADER include/spdk/nvme_zns.h 00:04:10.448 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:10.448 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:10.448 TEST_HEADER include/spdk/nvmf.h 00:04:10.448 TEST_HEADER include/spdk/nvmf_spec.h 00:04:10.448 TEST_HEADER include/spdk/nvmf_transport.h 00:04:10.448 TEST_HEADER include/spdk/opal.h 00:04:10.448 TEST_HEADER include/spdk/opal_spec.h 00:04:10.448 TEST_HEADER include/spdk/pci_ids.h 00:04:10.448 TEST_HEADER include/spdk/pipe.h 00:04:10.448 TEST_HEADER include/spdk/queue.h 00:04:10.448 TEST_HEADER include/spdk/reduce.h 00:04:10.448 TEST_HEADER include/spdk/rpc.h 00:04:10.448 TEST_HEADER include/spdk/scheduler.h 00:04:10.448 TEST_HEADER include/spdk/scsi.h 00:04:10.448 TEST_HEADER include/spdk/scsi_spec.h 00:04:10.448 TEST_HEADER include/spdk/sock.h 00:04:10.448 TEST_HEADER include/spdk/stdinc.h 00:04:10.448 
TEST_HEADER include/spdk/string.h 00:04:10.448 TEST_HEADER include/spdk/thread.h 00:04:10.448 TEST_HEADER include/spdk/trace.h 00:04:10.448 TEST_HEADER include/spdk/trace_parser.h 00:04:10.448 TEST_HEADER include/spdk/tree.h 00:04:10.708 TEST_HEADER include/spdk/ublk.h 00:04:10.708 TEST_HEADER include/spdk/util.h 00:04:10.708 TEST_HEADER include/spdk/uuid.h 00:04:10.708 TEST_HEADER include/spdk/version.h 00:04:10.708 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:10.708 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:10.708 TEST_HEADER include/spdk/vhost.h 00:04:10.708 TEST_HEADER include/spdk/vmd.h 00:04:10.708 TEST_HEADER include/spdk/xor.h 00:04:10.708 TEST_HEADER include/spdk/zipf.h 00:04:10.708 CXX test/cpp_headers/accel.o 00:04:10.708 LINK bdevio 00:04:10.708 CC test/app/histogram_perf/histogram_perf.o 00:04:10.708 CC test/app/jsoncat/jsoncat.o 00:04:10.708 LINK nvme_fuzz 00:04:10.708 LINK hello_bdev 00:04:10.708 CC examples/bdev/bdevperf/bdevperf.o 00:04:10.708 CC test/app/stub/stub.o 00:04:10.708 CXX test/cpp_headers/accel_module.o 00:04:10.708 LINK histogram_perf 00:04:10.708 LINK jsoncat 00:04:10.967 LINK test_dma 00:04:10.967 LINK spdk_top 00:04:10.967 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:10.967 LINK stub 00:04:10.967 CXX test/cpp_headers/assert.o 00:04:10.967 CC test/env/mem_callbacks/mem_callbacks.o 00:04:10.967 CC examples/blob/hello_world/hello_blob.o 00:04:10.967 CC examples/nvme/hello_world/hello_world.o 00:04:10.967 CC examples/ioat/perf/perf.o 00:04:10.967 CXX test/cpp_headers/barrier.o 00:04:11.225 CC app/spdk_dd/spdk_dd.o 00:04:11.225 CC app/vhost/vhost.o 00:04:11.225 CC examples/sock/hello_world/hello_sock.o 00:04:11.225 LINK hello_blob 00:04:11.225 LINK ioat_perf 00:04:11.225 CXX test/cpp_headers/base64.o 00:04:11.225 LINK hello_world 00:04:11.225 LINK vhost 00:04:11.483 LINK bdevperf 00:04:11.483 CXX test/cpp_headers/bdev.o 00:04:11.483 LINK hello_sock 00:04:11.483 CC examples/ioat/verify/verify.o 00:04:11.483 CC examples/nvme/reconnect/reconnect.o 00:04:11.483 LINK spdk_dd 00:04:11.483 CC examples/blob/cli/blobcli.o 00:04:11.483 CXX test/cpp_headers/bdev_module.o 00:04:11.483 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:11.483 CC test/env/vtophys/vtophys.o 00:04:11.483 CC examples/vmd/lsvmd/lsvmd.o 00:04:11.742 LINK mem_callbacks 00:04:11.742 LINK verify 00:04:11.742 LINK vtophys 00:04:11.742 CC app/fio/nvme/fio_plugin.o 00:04:11.742 LINK lsvmd 00:04:11.742 CXX test/cpp_headers/bdev_zone.o 00:04:11.742 LINK reconnect 00:04:11.742 CXX test/cpp_headers/bit_array.o 00:04:11.742 CC examples/nvme/arbitration/arbitration.o 00:04:12.001 LINK blobcli 00:04:12.001 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:12.001 CC examples/vmd/led/led.o 00:04:12.001 CXX test/cpp_headers/bit_pool.o 00:04:12.001 LINK nvme_manage 00:04:12.001 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:12.001 CC app/fio/bdev/fio_plugin.o 00:04:12.001 LINK arbitration 00:04:12.001 CXX test/cpp_headers/blob_bdev.o 00:04:12.001 LINK led 00:04:12.260 LINK env_dpdk_post_init 00:04:12.260 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:12.260 CC test/event/event_perf/event_perf.o 00:04:12.260 LINK spdk_nvme 00:04:12.260 CC examples/nvme/hotplug/hotplug.o 00:04:12.260 CXX test/cpp_headers/blobfs_bdev.o 00:04:12.260 CC test/env/memory/memory_ut.o 00:04:12.260 CC test/lvol/esnap/esnap.o 00:04:12.260 LINK iscsi_fuzz 00:04:12.260 CXX test/cpp_headers/blobfs.o 00:04:12.519 LINK event_perf 00:04:12.519 CC examples/nvmf/nvmf/nvmf.o 00:04:12.519 LINK spdk_bdev 00:04:12.519 CXX 
test/cpp_headers/blob.o 00:04:12.519 LINK hotplug 00:04:12.519 CXX test/cpp_headers/conf.o 00:04:12.519 LINK vhost_fuzz 00:04:12.519 CC test/event/reactor/reactor.o 00:04:12.519 CC examples/util/zipf/zipf.o 00:04:12.519 CXX test/cpp_headers/config.o 00:04:12.778 LINK nvmf 00:04:12.778 LINK reactor 00:04:12.778 CXX test/cpp_headers/cpuset.o 00:04:12.778 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:12.778 LINK zipf 00:04:12.778 CC examples/idxd/perf/perf.o 00:04:12.778 CC examples/thread/thread/thread_ex.o 00:04:12.778 CXX test/cpp_headers/crc16.o 00:04:12.778 CC test/event/reactor_perf/reactor_perf.o 00:04:12.778 CXX test/cpp_headers/crc32.o 00:04:12.778 CXX test/cpp_headers/crc64.o 00:04:12.778 LINK cmb_copy 00:04:13.037 LINK reactor_perf 00:04:13.037 CXX test/cpp_headers/dif.o 00:04:13.037 LINK thread 00:04:13.037 LINK idxd_perf 00:04:13.037 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:13.037 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:13.037 CC examples/nvme/abort/abort.o 00:04:13.037 LINK memory_ut 00:04:13.296 CXX test/cpp_headers/dma.o 00:04:13.296 CC test/event/app_repeat/app_repeat.o 00:04:13.296 LINK pmr_persistence 00:04:13.296 LINK interrupt_tgt 00:04:13.296 CXX test/cpp_headers/endian.o 00:04:13.296 CC test/env/pci/pci_ut.o 00:04:13.296 CC test/event/scheduler/scheduler.o 00:04:13.296 LINK app_repeat 00:04:13.296 CXX test/cpp_headers/env_dpdk.o 00:04:13.296 CXX test/cpp_headers/env.o 00:04:13.296 CXX test/cpp_headers/event.o 00:04:13.296 LINK abort 00:04:13.555 CXX test/cpp_headers/fd_group.o 00:04:13.555 CXX test/cpp_headers/fd.o 00:04:13.555 LINK scheduler 00:04:13.555 CXX test/cpp_headers/file.o 00:04:13.555 CXX test/cpp_headers/ftl.o 00:04:13.555 CC test/rpc_client/rpc_client_test.o 00:04:13.555 CC test/nvme/aer/aer.o 00:04:13.555 LINK pci_ut 00:04:13.555 CC test/thread/poller_perf/poller_perf.o 00:04:13.814 CC test/nvme/reset/reset.o 00:04:13.814 CXX test/cpp_headers/gpt_spec.o 00:04:13.814 CC test/nvme/sgl/sgl.o 00:04:13.814 LINK rpc_client_test 00:04:13.814 CC test/nvme/e2edp/nvme_dp.o 00:04:14.073 LINK poller_perf 00:04:14.073 LINK aer 00:04:14.073 CXX test/cpp_headers/hexlify.o 00:04:14.073 LINK reset 00:04:14.074 CC test/nvme/overhead/overhead.o 00:04:14.074 CC test/nvme/err_injection/err_injection.o 00:04:14.074 CXX test/cpp_headers/histogram_data.o 00:04:14.074 LINK sgl 00:04:14.074 CC test/nvme/startup/startup.o 00:04:14.074 CXX test/cpp_headers/idxd.o 00:04:14.074 CXX test/cpp_headers/idxd_spec.o 00:04:14.074 LINK nvme_dp 00:04:14.332 LINK err_injection 00:04:14.332 LINK startup 00:04:14.332 LINK overhead 00:04:14.332 CC test/nvme/simple_copy/simple_copy.o 00:04:14.332 CC test/nvme/reserve/reserve.o 00:04:14.332 CXX test/cpp_headers/init.o 00:04:14.332 CC test/nvme/connect_stress/connect_stress.o 00:04:14.332 CC test/nvme/boot_partition/boot_partition.o 00:04:14.591 CC test/nvme/fused_ordering/fused_ordering.o 00:04:14.591 CC test/nvme/compliance/nvme_compliance.o 00:04:14.591 CXX test/cpp_headers/ioat.o 00:04:14.591 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:14.591 LINK reserve 00:04:14.591 LINK boot_partition 00:04:14.591 LINK connect_stress 00:04:14.591 LINK simple_copy 00:04:14.591 CXX test/cpp_headers/ioat_spec.o 00:04:14.591 CXX test/cpp_headers/iscsi_spec.o 00:04:14.591 LINK fused_ordering 00:04:14.591 CXX test/cpp_headers/json.o 00:04:14.591 LINK doorbell_aers 00:04:14.850 CC test/nvme/fdp/fdp.o 00:04:14.850 CC test/nvme/cuse/cuse.o 00:04:14.850 LINK nvme_compliance 00:04:14.850 CXX test/cpp_headers/jsonrpc.o 00:04:14.850 CXX 
test/cpp_headers/likely.o 00:04:14.850 CXX test/cpp_headers/log.o 00:04:14.850 CXX test/cpp_headers/lvol.o 00:04:14.850 CXX test/cpp_headers/memory.o 00:04:14.850 CXX test/cpp_headers/mmio.o 00:04:14.850 CXX test/cpp_headers/nbd.o 00:04:14.850 CXX test/cpp_headers/notify.o 00:04:15.109 CXX test/cpp_headers/nvme_intel.o 00:04:15.109 CXX test/cpp_headers/nvme.o 00:04:15.109 CXX test/cpp_headers/nvme_ocssd.o 00:04:15.109 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:15.109 LINK fdp 00:04:15.109 CXX test/cpp_headers/nvme_spec.o 00:04:15.109 CXX test/cpp_headers/nvme_zns.o 00:04:15.109 CXX test/cpp_headers/nvmf_cmd.o 00:04:15.109 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:15.367 CXX test/cpp_headers/nvmf.o 00:04:15.368 CXX test/cpp_headers/nvmf_spec.o 00:04:15.368 CXX test/cpp_headers/nvmf_transport.o 00:04:15.368 CXX test/cpp_headers/opal.o 00:04:15.368 CXX test/cpp_headers/opal_spec.o 00:04:15.368 CXX test/cpp_headers/pci_ids.o 00:04:15.368 CXX test/cpp_headers/pipe.o 00:04:15.368 CXX test/cpp_headers/queue.o 00:04:15.626 CXX test/cpp_headers/reduce.o 00:04:15.626 CXX test/cpp_headers/rpc.o 00:04:15.626 CXX test/cpp_headers/scheduler.o 00:04:15.626 CXX test/cpp_headers/scsi.o 00:04:15.626 CXX test/cpp_headers/scsi_spec.o 00:04:15.626 CXX test/cpp_headers/sock.o 00:04:15.626 CXX test/cpp_headers/stdinc.o 00:04:15.626 CXX test/cpp_headers/string.o 00:04:15.627 CXX test/cpp_headers/thread.o 00:04:15.886 LINK cuse 00:04:15.886 CXX test/cpp_headers/trace.o 00:04:15.886 CXX test/cpp_headers/trace_parser.o 00:04:15.886 CXX test/cpp_headers/tree.o 00:04:15.886 CXX test/cpp_headers/ublk.o 00:04:15.886 CXX test/cpp_headers/util.o 00:04:15.886 CXX test/cpp_headers/uuid.o 00:04:15.886 CXX test/cpp_headers/version.o 00:04:15.886 CXX test/cpp_headers/vfio_user_pci.o 00:04:15.886 CXX test/cpp_headers/vfio_user_spec.o 00:04:15.886 CXX test/cpp_headers/vhost.o 00:04:15.886 CXX test/cpp_headers/vmd.o 00:04:15.886 CXX test/cpp_headers/xor.o 00:04:15.886 CXX test/cpp_headers/zipf.o 00:04:16.453 LINK esnap 00:04:18.987 00:04:18.987 real 0m49.090s 00:04:18.987 user 4m35.630s 00:04:18.987 sys 1m3.645s 00:04:18.987 14:50:42 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:18.987 14:50:42 -- common/autotest_common.sh@10 -- $ set +x 00:04:18.987 ************************************ 00:04:18.987 END TEST make 00:04:18.987 ************************************ 00:04:18.987 14:50:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:18.987 14:50:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:18.987 14:50:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:18.987 14:50:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:18.987 14:50:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:18.987 14:50:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:18.987 14:50:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:18.987 14:50:42 -- scripts/common.sh@335 -- # IFS=.-: 00:04:18.987 14:50:42 -- scripts/common.sh@335 -- # read -ra ver1 00:04:18.987 14:50:42 -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.987 14:50:42 -- scripts/common.sh@336 -- # read -ra ver2 00:04:18.987 14:50:42 -- scripts/common.sh@337 -- # local 'op=<' 00:04:18.987 14:50:42 -- scripts/common.sh@339 -- # ver1_l=2 00:04:18.987 14:50:42 -- scripts/common.sh@340 -- # ver2_l=1 00:04:18.987 14:50:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:19.246 14:50:42 -- scripts/common.sh@343 -- # case "$op" in 00:04:19.246 14:50:42 -- scripts/common.sh@344 -- # : 1 00:04:19.246 14:50:42 
-- scripts/common.sh@363 -- # (( v = 0 )) 00:04:19.246 14:50:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:19.246 14:50:42 -- scripts/common.sh@364 -- # decimal 1 00:04:19.246 14:50:42 -- scripts/common.sh@352 -- # local d=1 00:04:19.246 14:50:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.246 14:50:42 -- scripts/common.sh@354 -- # echo 1 00:04:19.246 14:50:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:19.246 14:50:42 -- scripts/common.sh@365 -- # decimal 2 00:04:19.246 14:50:42 -- scripts/common.sh@352 -- # local d=2 00:04:19.246 14:50:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.246 14:50:42 -- scripts/common.sh@354 -- # echo 2 00:04:19.246 14:50:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:19.246 14:50:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:19.246 14:50:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:19.246 14:50:42 -- scripts/common.sh@367 -- # return 0 00:04:19.246 14:50:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.246 14:50:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:19.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.246 --rc genhtml_branch_coverage=1 00:04:19.246 --rc genhtml_function_coverage=1 00:04:19.246 --rc genhtml_legend=1 00:04:19.246 --rc geninfo_all_blocks=1 00:04:19.246 --rc geninfo_unexecuted_blocks=1 00:04:19.246 00:04:19.246 ' 00:04:19.246 14:50:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:19.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.246 --rc genhtml_branch_coverage=1 00:04:19.246 --rc genhtml_function_coverage=1 00:04:19.246 --rc genhtml_legend=1 00:04:19.246 --rc geninfo_all_blocks=1 00:04:19.246 --rc geninfo_unexecuted_blocks=1 00:04:19.246 00:04:19.246 ' 00:04:19.246 14:50:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:19.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.246 --rc genhtml_branch_coverage=1 00:04:19.246 --rc genhtml_function_coverage=1 00:04:19.246 --rc genhtml_legend=1 00:04:19.246 --rc geninfo_all_blocks=1 00:04:19.246 --rc geninfo_unexecuted_blocks=1 00:04:19.246 00:04:19.246 ' 00:04:19.246 14:50:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:19.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.246 --rc genhtml_branch_coverage=1 00:04:19.246 --rc genhtml_function_coverage=1 00:04:19.246 --rc genhtml_legend=1 00:04:19.247 --rc geninfo_all_blocks=1 00:04:19.247 --rc geninfo_unexecuted_blocks=1 00:04:19.247 00:04:19.247 ' 00:04:19.247 14:50:42 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:19.247 14:50:42 -- nvmf/common.sh@7 -- # uname -s 00:04:19.247 14:50:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:19.247 14:50:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:19.247 14:50:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:19.247 14:50:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:19.247 14:50:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:19.247 14:50:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:19.247 14:50:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:19.247 14:50:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:19.247 14:50:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:19.247 14:50:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:19.247 14:50:42 -- 
nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:04:19.247 14:50:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:04:19.247 14:50:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:19.247 14:50:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:19.247 14:50:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:19.247 14:50:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:19.247 14:50:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:19.247 14:50:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:19.247 14:50:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:19.247 14:50:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.247 14:50:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.247 14:50:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.247 14:50:42 -- paths/export.sh@5 -- # export PATH 00:04:19.247 14:50:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.247 14:50:42 -- nvmf/common.sh@46 -- # : 0 00:04:19.247 14:50:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:19.247 14:50:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:19.247 14:50:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:19.247 14:50:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:19.247 14:50:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:19.247 14:50:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:19.247 14:50:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:19.247 14:50:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:19.247 14:50:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:19.247 14:50:42 -- spdk/autotest.sh@32 -- # uname -s 00:04:19.247 14:50:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:19.247 14:50:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:19.247 14:50:42 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:19.247 14:50:42 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:19.247 14:50:42 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:19.247 14:50:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:19.247 14:50:42 -- spdk/autotest.sh@46 -- # type -P udevadm 
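The autotest.sh prologue traced above swaps the kernel's core_pattern for SPDK's own collector so any crash during the run lands in the output directory instead of systemd-coredump. A minimal sketch of that hand-off, assuming the redirect targets of the two echoes (which xtrace does not show) are /proc/sys/kernel/core_pattern and a companion path consumed by core-collector.sh; $rootdir and $output_dir stand in for the repo and ../output paths seen in the log:

# save the handler that was installed before the run (systemd-coredump here)
old_core_pattern=$(cat /proc/sys/kernel/core_pattern)
mkdir -p "$output_dir/coredumps"
# pipe cores to the collector with PID, signal and timestamp placeholders
echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
# second echo records where dumps should go; its redirect target is not visible in the trace
echo "$output_dir/coredumps"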
00:04:19.247 14:50:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:19.247 14:50:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:19.247 14:50:42 -- spdk/autotest.sh@48 -- # udevadm_pid=61845 00:04:19.247 14:50:42 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:19.247 14:50:42 -- spdk/autotest.sh@54 -- # echo 61847 00:04:19.247 14:50:42 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:19.247 14:50:42 -- spdk/autotest.sh@56 -- # echo 61852 00:04:19.247 14:50:42 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:19.247 14:50:42 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:19.247 14:50:42 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:19.247 14:50:42 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:19.247 14:50:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:19.247 14:50:42 -- common/autotest_common.sh@10 -- # set +x 00:04:19.247 14:50:42 -- spdk/autotest.sh@70 -- # create_test_list 00:04:19.247 14:50:42 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:19.247 14:50:42 -- common/autotest_common.sh@10 -- # set +x 00:04:19.247 14:50:42 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:19.247 14:50:42 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:19.247 14:50:42 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:19.247 14:50:42 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:19.247 14:50:42 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:19.247 14:50:42 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:19.247 14:50:42 -- common/autotest_common.sh@1450 -- # uname 00:04:19.247 14:50:42 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:19.247 14:50:42 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:19.247 14:50:42 -- common/autotest_common.sh@1470 -- # uname 00:04:19.247 14:50:42 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:19.247 14:50:42 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:19.247 14:50:42 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:19.505 lcov: LCOV version 1.15 00:04:19.505 14:50:42 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:27.617 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:27.617 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:27.617 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:27.617 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:27.617 
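The coverage capture above only produces a zero-count baseline (-i) before any test runs, which is why geninfo warns about .gcno files with no functions: those sources simply compile to no executable code. A sketch of the usual three-step lcov flow this baseline feeds into; the post-test capture and the merge are assumptions here, since they fall outside this excerpt:

# 1. baseline with every instrumented file at zero hits (traced above)
lcov $LCOV_OPTS -q -c --no-external -i -t Baseline -d "$src" -o "$out/cov_base.info"
# 2. capture real counters once the test suites have written their .gcda files (assumed later step)
lcov $LCOV_OPTS -q -c --no-external -t Tests -d "$src" -o "$out/cov_test.info"
# 3. merge so files never touched by a test still show up at 0% (assumed later step)
lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"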
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:27.617 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:45.707 14:51:07 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:45.707 14:51:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:45.707 14:51:07 -- common/autotest_common.sh@10 -- # set +x 00:04:45.707 14:51:07 -- spdk/autotest.sh@89 -- # rm -f 00:04:45.707 14:51:07 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:45.707 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.707 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:45.707 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:45.707 14:51:08 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:45.707 14:51:08 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:45.707 14:51:08 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:45.707 14:51:08 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:45.707 14:51:08 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:45.707 14:51:08 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:45.707 14:51:08 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:45.707 14:51:08 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:45.707 14:51:08 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:45.707 14:51:08 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:45.707 14:51:08 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:45.707 14:51:08 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:45.707 14:51:08 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:45.707 14:51:08 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:45.707 14:51:08 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:45.707 14:51:08 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:45.707 14:51:08 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:45.707 14:51:08 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:45.707 14:51:08 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:45.707 14:51:08 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:45.707 14:51:08 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:45.707 14:51:08 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:45.707 14:51:08 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:45.707 14:51:08 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:45.707 14:51:08 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:45.707 14:51:08 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:04:45.707 14:51:08 -- spdk/autotest.sh@108 -- # grep -v p 00:04:45.707 14:51:08 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:45.707 14:51:08 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:45.707 14:51:08 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:45.707 14:51:08 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:45.707 14:51:08 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py 
/dev/nvme0n1 00:04:45.707 No valid GPT data, bailing 00:04:45.707 14:51:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:45.707 14:51:08 -- scripts/common.sh@393 -- # pt= 00:04:45.707 14:51:08 -- scripts/common.sh@394 -- # return 1 00:04:45.707 14:51:08 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:45.707 1+0 records in 00:04:45.707 1+0 records out 00:04:45.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049444 s, 212 MB/s 00:04:45.707 14:51:08 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:45.707 14:51:08 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:45.707 14:51:08 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:04:45.707 14:51:08 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:45.707 14:51:08 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:45.707 No valid GPT data, bailing 00:04:45.707 14:51:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:45.707 14:51:08 -- scripts/common.sh@393 -- # pt= 00:04:45.707 14:51:08 -- scripts/common.sh@394 -- # return 1 00:04:45.707 14:51:08 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:45.707 1+0 records in 00:04:45.707 1+0 records out 00:04:45.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491136 s, 214 MB/s 00:04:45.707 14:51:08 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:45.707 14:51:08 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:45.707 14:51:08 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:04:45.707 14:51:08 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:04:45.707 14:51:08 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:45.707 No valid GPT data, bailing 00:04:45.707 14:51:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:45.707 14:51:08 -- scripts/common.sh@393 -- # pt= 00:04:45.707 14:51:08 -- scripts/common.sh@394 -- # return 1 00:04:45.707 14:51:08 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:45.707 1+0 records in 00:04:45.707 1+0 records out 00:04:45.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00488469 s, 215 MB/s 00:04:45.707 14:51:08 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:45.707 14:51:08 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:45.707 14:51:08 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:04:45.707 14:51:08 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:04:45.707 14:51:08 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:45.707 No valid GPT data, bailing 00:04:45.707 14:51:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:45.707 14:51:08 -- scripts/common.sh@393 -- # pt= 00:04:45.707 14:51:08 -- scripts/common.sh@394 -- # return 1 00:04:45.707 14:51:08 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:45.707 1+0 records in 00:04:45.707 1+0 records out 00:04:45.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00485114 s, 216 MB/s 00:04:45.707 14:51:08 -- spdk/autotest.sh@116 -- # sync 00:04:45.707 14:51:08 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:45.707 14:51:08 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:45.707 14:51:08 -- common/autotest_common.sh@22 -- # reap_spdk_processes 
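Each namespace above goes through the same gate before being zeroed: the spdk-gpt.py probe plus a blkid PTTYPE check decide whether the block device is in use, and only unclaimed devices get their first MiB cleared. A condensed sketch of that loop (loop variable assumed; block_in_use is the scripts/common.sh helper seen in the trace, returning non-zero when the device is free):

for dev in $(ls /dev/nvme*n* | grep -v p || true); do   # namespaces only, skip partitions
  if ! block_in_use "$dev"; then                        # GPT / partition-table probe says the device is free
    dd if=/dev/zero of="$dev" bs=1M count=1             # wipe the label area so stale metadata cannot leak into tests
  fi
done
sync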
00:04:47.086 14:51:10 -- spdk/autotest.sh@122 -- # uname -s 00:04:47.086 14:51:10 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:04:47.086 14:51:10 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:47.086 14:51:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.086 14:51:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.086 14:51:10 -- common/autotest_common.sh@10 -- # set +x 00:04:47.086 ************************************ 00:04:47.086 START TEST setup.sh 00:04:47.086 ************************************ 00:04:47.086 14:51:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:47.346 * Looking for test storage... 00:04:47.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:47.346 14:51:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:47.346 14:51:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:47.346 14:51:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:47.346 14:51:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:47.346 14:51:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:47.346 14:51:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:47.346 14:51:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:47.346 14:51:10 -- scripts/common.sh@335 -- # IFS=.-: 00:04:47.346 14:51:10 -- scripts/common.sh@335 -- # read -ra ver1 00:04:47.346 14:51:10 -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.346 14:51:10 -- scripts/common.sh@336 -- # read -ra ver2 00:04:47.346 14:51:10 -- scripts/common.sh@337 -- # local 'op=<' 00:04:47.346 14:51:10 -- scripts/common.sh@339 -- # ver1_l=2 00:04:47.346 14:51:10 -- scripts/common.sh@340 -- # ver2_l=1 00:04:47.346 14:51:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:47.346 14:51:10 -- scripts/common.sh@343 -- # case "$op" in 00:04:47.346 14:51:10 -- scripts/common.sh@344 -- # : 1 00:04:47.346 14:51:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:47.346 14:51:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.346 14:51:10 -- scripts/common.sh@364 -- # decimal 1 00:04:47.346 14:51:10 -- scripts/common.sh@352 -- # local d=1 00:04:47.346 14:51:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.346 14:51:10 -- scripts/common.sh@354 -- # echo 1 00:04:47.346 14:51:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:47.346 14:51:10 -- scripts/common.sh@365 -- # decimal 2 00:04:47.346 14:51:10 -- scripts/common.sh@352 -- # local d=2 00:04:47.346 14:51:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.346 14:51:10 -- scripts/common.sh@354 -- # echo 2 00:04:47.346 14:51:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:47.346 14:51:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:47.346 14:51:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:47.346 14:51:10 -- scripts/common.sh@367 -- # return 0 00:04:47.346 14:51:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.346 14:51:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:47.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.346 --rc genhtml_branch_coverage=1 00:04:47.346 --rc genhtml_function_coverage=1 00:04:47.346 --rc genhtml_legend=1 00:04:47.346 --rc geninfo_all_blocks=1 00:04:47.346 --rc geninfo_unexecuted_blocks=1 00:04:47.346 00:04:47.346 ' 00:04:47.346 14:51:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:47.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.346 --rc genhtml_branch_coverage=1 00:04:47.346 --rc genhtml_function_coverage=1 00:04:47.346 --rc genhtml_legend=1 00:04:47.346 --rc geninfo_all_blocks=1 00:04:47.346 --rc geninfo_unexecuted_blocks=1 00:04:47.346 00:04:47.346 ' 00:04:47.346 14:51:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:47.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.346 --rc genhtml_branch_coverage=1 00:04:47.346 --rc genhtml_function_coverage=1 00:04:47.346 --rc genhtml_legend=1 00:04:47.346 --rc geninfo_all_blocks=1 00:04:47.346 --rc geninfo_unexecuted_blocks=1 00:04:47.346 00:04:47.346 ' 00:04:47.346 14:51:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:47.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.346 --rc genhtml_branch_coverage=1 00:04:47.346 --rc genhtml_function_coverage=1 00:04:47.346 --rc genhtml_legend=1 00:04:47.346 --rc geninfo_all_blocks=1 00:04:47.346 --rc geninfo_unexecuted_blocks=1 00:04:47.346 00:04:47.346 ' 00:04:47.346 14:51:10 -- setup/test-setup.sh@10 -- # uname -s 00:04:47.346 14:51:10 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:47.346 14:51:10 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:47.346 14:51:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.346 14:51:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.346 14:51:10 -- common/autotest_common.sh@10 -- # set +x 00:04:47.346 ************************************ 00:04:47.346 START TEST acl 00:04:47.346 ************************************ 00:04:47.346 14:51:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:47.346 * Looking for test storage... 
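The lt/cmp_versions dance traced above (and repeated before every sub-test) is a component-wise comparison of dotted version strings, used here to decide whether the installed lcov predates 2.x and therefore needs the extra --rc options. A simplified sketch for purely numeric components; the real helper in scripts/common.sh additionally validates each field through its decimal function:

lt() {                      # succeeds (returns 0) when $1 is strictly older than $2
  local -a ver1 ver2
  local v
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1                  # equal versions are not "less than"
}
# usage mirroring the trace: lcov 1.15 is older than 2, so the branch/function flags get added
lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'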
00:04:47.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:47.606 14:51:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:47.606 14:51:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:47.606 14:51:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:47.606 14:51:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:47.606 14:51:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:47.606 14:51:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:47.606 14:51:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:47.606 14:51:11 -- scripts/common.sh@335 -- # IFS=.-: 00:04:47.606 14:51:11 -- scripts/common.sh@335 -- # read -ra ver1 00:04:47.606 14:51:11 -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.606 14:51:11 -- scripts/common.sh@336 -- # read -ra ver2 00:04:47.606 14:51:11 -- scripts/common.sh@337 -- # local 'op=<' 00:04:47.606 14:51:11 -- scripts/common.sh@339 -- # ver1_l=2 00:04:47.606 14:51:11 -- scripts/common.sh@340 -- # ver2_l=1 00:04:47.606 14:51:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:47.606 14:51:11 -- scripts/common.sh@343 -- # case "$op" in 00:04:47.606 14:51:11 -- scripts/common.sh@344 -- # : 1 00:04:47.606 14:51:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:47.606 14:51:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:47.606 14:51:11 -- scripts/common.sh@364 -- # decimal 1 00:04:47.606 14:51:11 -- scripts/common.sh@352 -- # local d=1 00:04:47.606 14:51:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.606 14:51:11 -- scripts/common.sh@354 -- # echo 1 00:04:47.606 14:51:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:47.606 14:51:11 -- scripts/common.sh@365 -- # decimal 2 00:04:47.606 14:51:11 -- scripts/common.sh@352 -- # local d=2 00:04:47.606 14:51:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.606 14:51:11 -- scripts/common.sh@354 -- # echo 2 00:04:47.606 14:51:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:47.606 14:51:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:47.606 14:51:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:47.606 14:51:11 -- scripts/common.sh@367 -- # return 0 00:04:47.606 14:51:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.606 14:51:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:47.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.606 --rc genhtml_branch_coverage=1 00:04:47.606 --rc genhtml_function_coverage=1 00:04:47.606 --rc genhtml_legend=1 00:04:47.606 --rc geninfo_all_blocks=1 00:04:47.606 --rc geninfo_unexecuted_blocks=1 00:04:47.606 00:04:47.606 ' 00:04:47.606 14:51:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:47.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.606 --rc genhtml_branch_coverage=1 00:04:47.606 --rc genhtml_function_coverage=1 00:04:47.606 --rc genhtml_legend=1 00:04:47.606 --rc geninfo_all_blocks=1 00:04:47.606 --rc geninfo_unexecuted_blocks=1 00:04:47.606 00:04:47.606 ' 00:04:47.606 14:51:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:47.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.606 --rc genhtml_branch_coverage=1 00:04:47.606 --rc genhtml_function_coverage=1 00:04:47.606 --rc genhtml_legend=1 00:04:47.606 --rc geninfo_all_blocks=1 00:04:47.606 --rc geninfo_unexecuted_blocks=1 00:04:47.606 00:04:47.606 ' 00:04:47.606 14:51:11 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:47.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.606 --rc genhtml_branch_coverage=1 00:04:47.606 --rc genhtml_function_coverage=1 00:04:47.606 --rc genhtml_legend=1 00:04:47.606 --rc geninfo_all_blocks=1 00:04:47.606 --rc geninfo_unexecuted_blocks=1 00:04:47.606 00:04:47.606 ' 00:04:47.606 14:51:11 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:47.606 14:51:11 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:47.606 14:51:11 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:47.606 14:51:11 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:47.606 14:51:11 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:47.606 14:51:11 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:47.606 14:51:11 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:47.606 14:51:11 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:47.606 14:51:11 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:47.606 14:51:11 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:47.606 14:51:11 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:47.606 14:51:11 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:47.606 14:51:11 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:47.606 14:51:11 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:47.606 14:51:11 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:47.606 14:51:11 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:47.606 14:51:11 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:47.606 14:51:11 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:47.606 14:51:11 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:47.606 14:51:11 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:47.606 14:51:11 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:47.606 14:51:11 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:47.606 14:51:11 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:47.606 14:51:11 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:47.606 14:51:11 -- setup/acl.sh@12 -- # devs=() 00:04:47.606 14:51:11 -- setup/acl.sh@12 -- # declare -a devs 00:04:47.606 14:51:11 -- setup/acl.sh@13 -- # drivers=() 00:04:47.606 14:51:11 -- setup/acl.sh@13 -- # declare -A drivers 00:04:47.606 14:51:11 -- setup/acl.sh@51 -- # setup reset 00:04:47.606 14:51:11 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.606 14:51:11 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:48.543 14:51:11 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:48.543 14:51:11 -- setup/acl.sh@16 -- # local dev driver 00:04:48.543 14:51:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.543 14:51:11 -- setup/acl.sh@15 -- # setup output status 00:04:48.543 14:51:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.543 14:51:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:48.543 Hugepages 00:04:48.543 node hugesize free / total 00:04:48.543 14:51:11 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:48.543 14:51:11 -- setup/acl.sh@19 -- # continue 00:04:48.543 14:51:11 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:04:48.543 00:04:48.543 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:48.543 14:51:11 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:48.543 14:51:11 -- setup/acl.sh@19 -- # continue 00:04:48.543 14:51:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.543 14:51:12 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:48.544 14:51:12 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:48.544 14:51:12 -- setup/acl.sh@20 -- # continue 00:04:48.544 14:51:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.544 14:51:12 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:48.544 14:51:12 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:48.544 14:51:12 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:48.544 14:51:12 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:48.544 14:51:12 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:48.544 14:51:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.803 14:51:12 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:48.803 14:51:12 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:48.803 14:51:12 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:48.803 14:51:12 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:48.803 14:51:12 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:48.803 14:51:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.803 14:51:12 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:48.803 14:51:12 -- setup/acl.sh@54 -- # run_test denied denied 00:04:48.803 14:51:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.803 14:51:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.803 14:51:12 -- common/autotest_common.sh@10 -- # set +x 00:04:48.803 ************************************ 00:04:48.803 START TEST denied 00:04:48.803 ************************************ 00:04:48.803 14:51:12 -- common/autotest_common.sh@1114 -- # denied 00:04:48.803 14:51:12 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:48.803 14:51:12 -- setup/acl.sh@38 -- # setup output config 00:04:48.803 14:51:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.803 14:51:12 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:48.803 14:51:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:49.740 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:49.740 14:51:13 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:49.740 14:51:13 -- setup/acl.sh@28 -- # local dev driver 00:04:49.740 14:51:13 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:49.740 14:51:13 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:49.740 14:51:13 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:49.740 14:51:13 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:49.740 14:51:13 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:49.740 14:51:13 -- setup/acl.sh@41 -- # setup reset 00:04:49.740 14:51:13 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.740 14:51:13 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:50.308 ************************************ 00:04:50.308 END TEST denied 00:04:50.308 ************************************ 00:04:50.308 00:04:50.308 real 0m1.539s 00:04:50.308 user 0m0.628s 00:04:50.308 sys 0m0.881s 00:04:50.308 14:51:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.308 14:51:13 -- 
common/autotest_common.sh@10 -- # set +x 00:04:50.308 14:51:13 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:50.308 14:51:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.308 14:51:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.308 14:51:13 -- common/autotest_common.sh@10 -- # set +x 00:04:50.308 ************************************ 00:04:50.308 START TEST allowed 00:04:50.308 ************************************ 00:04:50.308 14:51:13 -- common/autotest_common.sh@1114 -- # allowed 00:04:50.308 14:51:13 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:50.308 14:51:13 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:50.308 14:51:13 -- setup/acl.sh@45 -- # setup output config 00:04:50.308 14:51:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.308 14:51:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:51.244 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:51.244 14:51:14 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:04:51.244 14:51:14 -- setup/acl.sh@28 -- # local dev driver 00:04:51.244 14:51:14 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:51.244 14:51:14 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:51.244 14:51:14 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:04:51.244 14:51:14 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:51.244 14:51:14 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:51.244 14:51:14 -- setup/acl.sh@48 -- # setup reset 00:04:51.244 14:51:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:51.244 14:51:14 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:52.181 00:04:52.181 real 0m1.634s 00:04:52.181 user 0m0.700s 00:04:52.181 sys 0m0.926s 00:04:52.181 14:51:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.181 14:51:15 -- common/autotest_common.sh@10 -- # set +x 00:04:52.181 ************************************ 00:04:52.181 END TEST allowed 00:04:52.181 ************************************ 00:04:52.181 ************************************ 00:04:52.181 END TEST acl 00:04:52.181 ************************************ 00:04:52.181 00:04:52.181 real 0m4.615s 00:04:52.181 user 0m2.045s 00:04:52.181 sys 0m2.563s 00:04:52.181 14:51:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.181 14:51:15 -- common/autotest_common.sh@10 -- # set +x 00:04:52.181 14:51:15 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:52.181 14:51:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.181 14:51:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.182 14:51:15 -- common/autotest_common.sh@10 -- # set +x 00:04:52.182 ************************************ 00:04:52.182 START TEST hugepages 00:04:52.182 ************************************ 00:04:52.182 14:51:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:52.182 * Looking for test storage... 
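The denied/allowed pair above exercises the PCI filter of scripts/setup.sh: a controller listed in PCI_BLOCKED must be skipped and left on its kernel driver, while with only one controller in PCI_ALLOWED it is the only device rebound to a userspace driver. A condensed sketch of the two checks as the log presents them (the greps are the pass criteria; paths shortened, exact invocation assumed):

# denied: 0000:00:06.0 is announced as skipped and stays on the kernel nvme driver
PCI_BLOCKED=' 0000:00:06.0' scripts/setup.sh config | grep 'Skipping denied controller at 0000:00:06.0'
[[ $(readlink -f /sys/bus/pci/devices/0000:00:06.0/driver) == */nvme ]]
# allowed: only 0000:00:06.0 is handed to uio_pci_generic; 0000:00:07.0 keeps the nvme driver
PCI_ALLOWED='0000:00:06.0' scripts/setup.sh config | grep -E '0000:00:06.0 .*: nvme -> .*'
[[ $(readlink -f /sys/bus/pci/devices/0000:00:07.0/driver) == */nvme ]]
scripts/setup.sh reset   # hand the controllers back to the kernel drivers afterwards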
00:04:52.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:52.182 14:51:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:52.182 14:51:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:52.182 14:51:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:52.182 14:51:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:52.182 14:51:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:52.182 14:51:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:52.182 14:51:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:52.182 14:51:15 -- scripts/common.sh@335 -- # IFS=.-: 00:04:52.182 14:51:15 -- scripts/common.sh@335 -- # read -ra ver1 00:04:52.182 14:51:15 -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.182 14:51:15 -- scripts/common.sh@336 -- # read -ra ver2 00:04:52.182 14:51:15 -- scripts/common.sh@337 -- # local 'op=<' 00:04:52.182 14:51:15 -- scripts/common.sh@339 -- # ver1_l=2 00:04:52.182 14:51:15 -- scripts/common.sh@340 -- # ver2_l=1 00:04:52.182 14:51:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:52.182 14:51:15 -- scripts/common.sh@343 -- # case "$op" in 00:04:52.182 14:51:15 -- scripts/common.sh@344 -- # : 1 00:04:52.182 14:51:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:52.182 14:51:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.182 14:51:15 -- scripts/common.sh@364 -- # decimal 1 00:04:52.182 14:51:15 -- scripts/common.sh@352 -- # local d=1 00:04:52.182 14:51:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.182 14:51:15 -- scripts/common.sh@354 -- # echo 1 00:04:52.182 14:51:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:52.182 14:51:15 -- scripts/common.sh@365 -- # decimal 2 00:04:52.182 14:51:15 -- scripts/common.sh@352 -- # local d=2 00:04:52.182 14:51:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.182 14:51:15 -- scripts/common.sh@354 -- # echo 2 00:04:52.182 14:51:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:52.182 14:51:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:52.182 14:51:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:52.182 14:51:15 -- scripts/common.sh@367 -- # return 0 00:04:52.182 14:51:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.182 14:51:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:52.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.182 --rc genhtml_branch_coverage=1 00:04:52.182 --rc genhtml_function_coverage=1 00:04:52.182 --rc genhtml_legend=1 00:04:52.182 --rc geninfo_all_blocks=1 00:04:52.182 --rc geninfo_unexecuted_blocks=1 00:04:52.182 00:04:52.182 ' 00:04:52.182 14:51:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:52.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.182 --rc genhtml_branch_coverage=1 00:04:52.182 --rc genhtml_function_coverage=1 00:04:52.182 --rc genhtml_legend=1 00:04:52.182 --rc geninfo_all_blocks=1 00:04:52.182 --rc geninfo_unexecuted_blocks=1 00:04:52.182 00:04:52.182 ' 00:04:52.182 14:51:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:52.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.182 --rc genhtml_branch_coverage=1 00:04:52.182 --rc genhtml_function_coverage=1 00:04:52.182 --rc genhtml_legend=1 00:04:52.182 --rc geninfo_all_blocks=1 00:04:52.182 --rc geninfo_unexecuted_blocks=1 00:04:52.182 00:04:52.182 ' 00:04:52.182 14:51:15 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:52.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.182 --rc genhtml_branch_coverage=1 00:04:52.182 --rc genhtml_function_coverage=1 00:04:52.182 --rc genhtml_legend=1 00:04:52.182 --rc geninfo_all_blocks=1 00:04:52.182 --rc geninfo_unexecuted_blocks=1 00:04:52.182 00:04:52.182 ' 00:04:52.182 14:51:15 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:52.182 14:51:15 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:52.182 14:51:15 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:52.182 14:51:15 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:52.182 14:51:15 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:52.182 14:51:15 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:52.182 14:51:15 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:52.182 14:51:15 -- setup/common.sh@18 -- # local node= 00:04:52.182 14:51:15 -- setup/common.sh@19 -- # local var val 00:04:52.182 14:51:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.182 14:51:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.182 14:51:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.182 14:51:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.182 14:51:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.182 14:51:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.182 14:51:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 4387788 kB' 'MemAvailable: 7317956 kB' 'Buffers: 2684 kB' 'Cached: 3130700 kB' 'SwapCached: 0 kB' 'Active: 496060 kB' 'Inactive: 2753688 kB' 'Active(anon): 126876 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753688 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 118016 kB' 'Mapped: 51008 kB' 'Shmem: 10512 kB' 'KReclaimable: 88488 kB' 'Slab: 190068 kB' 'SReclaimable: 88488 kB' 'SUnreclaim: 101580 kB' 'KernelStack: 6800 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411012 kB' 'Committed_AS: 330136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.182 14:51:15 -- 
setup/common.sh@32 -- # continue 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.182 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.182 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # continue 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.183 14:51:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.183 14:51:15 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:52.183 14:51:15 -- setup/common.sh@33 -- # echo 2048 00:04:52.183 14:51:15 -- setup/common.sh@33 -- # return 0 00:04:52.441 14:51:15 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:52.441 14:51:15 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:52.441 14:51:15 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:52.441 14:51:15 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:52.441 14:51:15 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:52.441 14:51:15 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:52.441 14:51:15 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:52.441 14:51:15 -- setup/hugepages.sh@207 -- # get_nodes 00:04:52.441 14:51:15 -- setup/hugepages.sh@27 -- # local node 00:04:52.441 14:51:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.441 14:51:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:52.441 14:51:15 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:52.441 14:51:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.441 14:51:15 -- setup/hugepages.sh@208 -- # clear_hp 00:04:52.441 14:51:15 -- setup/hugepages.sh@37 -- # local node hp 00:04:52.442 14:51:15 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:52.442 14:51:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.442 14:51:15 -- setup/hugepages.sh@41 -- # echo 0 00:04:52.442 14:51:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.442 14:51:15 -- setup/hugepages.sh@41 -- # echo 0 00:04:52.442 14:51:15 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:52.442 14:51:15 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:52.442 14:51:15 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:52.442 14:51:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.442 14:51:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.442 14:51:15 -- common/autotest_common.sh@10 -- # set +x 00:04:52.442 ************************************ 00:04:52.442 START TEST default_setup 00:04:52.442 ************************************ 00:04:52.442 14:51:15 -- common/autotest_common.sh@1114 -- # default_setup 00:04:52.442 14:51:15 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:52.442 14:51:15 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:52.442 14:51:15 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:52.442 14:51:15 -- setup/hugepages.sh@51 -- # shift 00:04:52.442 14:51:15 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:52.442 14:51:15 -- setup/hugepages.sh@52 -- # local node_ids 00:04:52.442 14:51:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:52.442 14:51:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:52.442 14:51:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:52.442 14:51:15 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:52.442 14:51:15 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.442 14:51:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:52.442 14:51:15 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:52.442 14:51:15 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.442 14:51:15 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.442 14:51:15 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:52.442 14:51:15 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:52.442 14:51:15 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:52.442 14:51:15 -- setup/hugepages.sh@73 -- # return 0 00:04:52.442 14:51:15 -- setup/hugepages.sh@137 -- # setup output 00:04:52.442 14:51:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.442 14:51:15 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:53.006 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:53.267 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:53.267 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:53.267 14:51:16 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:53.267 14:51:16 -- setup/hugepages.sh@89 -- # local node 00:04:53.267 14:51:16 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:53.267 14:51:16 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:53.267 14:51:16 -- setup/hugepages.sh@92 -- # local surp 00:04:53.267 14:51:16 -- setup/hugepages.sh@93 -- # local resv 00:04:53.267 14:51:16 -- setup/hugepages.sh@94 -- # local anon 00:04:53.267 14:51:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:53.267 14:51:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:53.267 14:51:16 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:53.267 14:51:16 -- setup/common.sh@18 -- # local node= 00:04:53.267 14:51:16 -- setup/common.sh@19 -- # local var val 00:04:53.267 14:51:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.267 14:51:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.267 14:51:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.267 14:51:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.267 14:51:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.267 14:51:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.267 14:51:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6493280 kB' 'MemAvailable: 9423304 kB' 'Buffers: 2684 kB' 'Cached: 3130692 kB' 'SwapCached: 0 kB' 'Active: 497820 kB' 'Inactive: 2753696 kB' 'Active(anon): 128636 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753696 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119760 kB' 'Mapped: 51156 kB' 'Shmem: 10488 kB' 'KReclaimable: 88180 kB' 'Slab: 189752 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101572 kB' 'KernelStack: 6752 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 327188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.267 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.267 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- 
setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.268 14:51:16 -- setup/common.sh@33 -- # echo 0 00:04:53.268 14:51:16 -- setup/common.sh@33 -- # return 0 00:04:53.268 14:51:16 -- setup/hugepages.sh@97 -- # anon=0 00:04:53.268 14:51:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:53.268 14:51:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.268 14:51:16 -- setup/common.sh@18 -- # local node= 00:04:53.268 14:51:16 -- setup/common.sh@19 -- # local var val 00:04:53.268 14:51:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.268 14:51:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.268 14:51:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.268 14:51:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.268 14:51:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.268 14:51:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6493280 kB' 'MemAvailable: 9423304 kB' 'Buffers: 2684 kB' 'Cached: 3130692 kB' 'SwapCached: 0 kB' 'Active: 497516 kB' 'Inactive: 2753696 kB' 'Active(anon): 128332 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753696 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119420 kB' 'Mapped: 51008 kB' 'Shmem: 10488 kB' 'KReclaimable: 88180 kB' 'Slab: 189744 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101564 kB' 'KernelStack: 6752 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 327188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.268 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.268 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 
00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- 
setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.269 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.269 14:51:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 
00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.270 14:51:16 -- setup/common.sh@33 -- # echo 0 00:04:53.270 14:51:16 -- setup/common.sh@33 -- # return 0 00:04:53.270 14:51:16 -- setup/hugepages.sh@99 -- # surp=0 00:04:53.270 14:51:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:53.270 14:51:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:53.270 14:51:16 -- setup/common.sh@18 -- # local node= 00:04:53.270 14:51:16 -- setup/common.sh@19 -- # local var val 00:04:53.270 14:51:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.270 14:51:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.270 14:51:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.270 14:51:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.270 14:51:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.270 14:51:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.270 
14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6493532 kB' 'MemAvailable: 9423560 kB' 'Buffers: 2684 kB' 'Cached: 3130696 kB' 'SwapCached: 0 kB' 'Active: 497096 kB' 'Inactive: 2753700 kB' 'Active(anon): 127912 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119028 kB' 'Mapped: 51008 kB' 'Shmem: 10488 kB' 'KReclaimable: 88180 kB' 'Slab: 189740 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101560 kB' 'KernelStack: 6752 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 327188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 
14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.270 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.270 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 
14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.271 14:51:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.271 14:51:16 -- setup/common.sh@33 -- # echo 0 00:04:53.271 14:51:16 -- setup/common.sh@33 -- # return 0 00:04:53.271 nr_hugepages=1024 00:04:53.271 resv_hugepages=0 00:04:53.271 14:51:16 -- setup/hugepages.sh@100 -- # resv=0 00:04:53.271 14:51:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:53.271 14:51:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:53.271 surplus_hugepages=0 00:04:53.271 14:51:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:53.271 anon_hugepages=0 00:04:53.271 14:51:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:53.271 14:51:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:53.271 14:51:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:53.271 14:51:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:53.271 14:51:16 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:53.271 14:51:16 -- setup/common.sh@18 -- # local node= 00:04:53.271 14:51:16 -- setup/common.sh@19 -- # local var val 00:04:53.271 14:51:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.271 14:51:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.271 14:51:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.271 14:51:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.271 14:51:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.271 14:51:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.271 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6493532 kB' 'MemAvailable: 9423560 kB' 'Buffers: 2684 kB' 'Cached: 3130696 kB' 'SwapCached: 0 kB' 'Active: 497232 kB' 'Inactive: 2753700 kB' 'Active(anon): 128048 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119184 kB' 'Mapped: 51008 kB' 'Shmem: 10488 kB' 'KReclaimable: 88180 kB' 'Slab: 189720 kB' 
'SReclaimable: 88180 kB' 'SUnreclaim: 101540 kB' 'KernelStack: 6752 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 327188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 
14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- 
setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.272 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.272 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.273 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.273 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.531 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.531 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.531 14:51:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.531 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.531 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.531 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.531 14:51:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.531 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.531 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.531 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.531 14:51:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.531 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.531 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.531 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.531 14:51:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.531 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.531 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.531 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.531 14:51:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.531 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.531 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.531 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.531 14:51:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.531 14:51:16 -- 
setup/common.sh@32 -- # continue 00:04:53.531 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.531 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.531 14:51:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.531 14:51:16 -- setup/common.sh@33 -- # echo 1024 00:04:53.531 14:51:16 -- setup/common.sh@33 -- # return 0 00:04:53.531 14:51:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:53.531 14:51:16 -- setup/hugepages.sh@112 -- # get_nodes 00:04:53.531 14:51:16 -- setup/hugepages.sh@27 -- # local node 00:04:53.532 14:51:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.532 14:51:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:53.532 14:51:16 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:53.532 14:51:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:53.532 14:51:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:53.532 14:51:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:53.532 14:51:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:53.532 14:51:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.532 14:51:16 -- setup/common.sh@18 -- # local node=0 00:04:53.532 14:51:16 -- setup/common.sh@19 -- # local var val 00:04:53.532 14:51:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.532 14:51:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.532 14:51:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:53.532 14:51:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:53.532 14:51:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.532 14:51:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6493532 kB' 'MemUsed: 5745588 kB' 'SwapCached: 0 kB' 'Active: 497404 kB' 'Inactive: 2753700 kB' 'Active(anon): 128220 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753700 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 3133380 kB' 'Mapped: 51008 kB' 'AnonPages: 119380 kB' 'Shmem: 10488 kB' 'KernelStack: 6752 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88180 kB' 'Slab: 189712 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 
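The long run of "continue" entries around this point is setup/common.sh's get_meminfo helper scanning a meminfo file field by field: it sets IFS=': ', reads each key/value pair, skips keys that do not match the requested one (HugePages_Total above, HugePages_Surp for node 0 below), and echoes the value and returns as soon as it hits the match. The snippet below is a minimal stand-alone sketch of that lookup pattern only; the function name get_node_meminfo and the sed-based prefix strip are illustrative choices, not the exact code in setup/common.sh.

#!/usr/bin/env bash
# get_node_meminfo KEY [NODE] -- sketch of the lookup pattern traced above.
get_node_meminfo() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument, prefer the per-node meminfo file when it exists,
    # as the traced helper does for /sys/devices/system/node/node0/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node <N> "; strip that prefix,
    # then scan key by key, skipping non-matching fields just like the
    # repeated "continue" lines in the trace, and echo the value on a match.
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] || continue
        echo "$val"
        return 0
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1
}

# Example: per-node surplus huge pages for node 0 (prints e.g. "0",
# matching the "echo 0" the trace returns for HugePages_Surp below).
get_node_meminfo HugePages_Surp 0

The verification step that follows simply adds the surplus and reserved counts into each node's expected total and compares it against the configured value, which is why the test prints "node0=1024 expecting 1024" before declaring default_setup done.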
00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.532 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.532 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.533 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.533 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.533 14:51:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.533 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.533 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.533 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.533 14:51:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.533 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.533 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.533 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.533 14:51:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.533 14:51:16 -- setup/common.sh@32 -- # continue 00:04:53.533 14:51:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.533 14:51:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.533 14:51:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.533 14:51:16 -- setup/common.sh@33 -- # echo 0 00:04:53.533 14:51:16 -- setup/common.sh@33 -- # return 0 00:04:53.533 14:51:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:53.533 14:51:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:53.533 14:51:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:53.533 14:51:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:53.533 node0=1024 expecting 1024 00:04:53.533 14:51:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:53.533 14:51:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:53.533 00:04:53.533 real 0m1.097s 00:04:53.533 user 0m0.502s 00:04:53.533 sys 0m0.514s 00:04:53.533 ************************************ 00:04:53.533 END TEST default_setup 00:04:53.533 ************************************ 00:04:53.533 14:51:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:53.533 14:51:16 -- common/autotest_common.sh@10 -- # set +x 00:04:53.533 14:51:16 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:53.533 14:51:16 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.533 14:51:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.533 14:51:16 -- common/autotest_common.sh@10 -- # set +x 00:04:53.533 ************************************ 00:04:53.533 START TEST per_node_1G_alloc 00:04:53.533 ************************************ 00:04:53.533 14:51:16 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:04:53.533 14:51:16 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:53.533 14:51:16 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:53.533 14:51:16 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:53.533 14:51:16 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:53.533 14:51:16 -- setup/hugepages.sh@51 -- # shift 00:04:53.533 14:51:16 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:53.533 14:51:16 -- setup/hugepages.sh@52 -- # local node_ids 00:04:53.533 14:51:16 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:53.533 14:51:16 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:53.533 14:51:16 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:53.533 14:51:16 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:53.533 14:51:16 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:53.533 14:51:16 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:53.533 14:51:16 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:53.533 14:51:16 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:53.533 14:51:16 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:53.533 14:51:16 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:53.533 14:51:16 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:53.533 14:51:16 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:53.533 14:51:16 -- setup/hugepages.sh@73 -- # return 0 00:04:53.533 14:51:16 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:53.533 14:51:16 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:53.533 14:51:16 -- setup/hugepages.sh@146 -- # setup output 00:04:53.533 14:51:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.533 14:51:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:53.791 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:53.791 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:53.791 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:53.791 14:51:17 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:53.791 14:51:17 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:53.791 14:51:17 -- setup/hugepages.sh@89 -- # local node 00:04:53.791 14:51:17 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:53.791 14:51:17 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:53.791 14:51:17 -- setup/hugepages.sh@92 -- # local surp 00:04:53.791 14:51:17 -- setup/hugepages.sh@93 -- # local resv 00:04:53.791 14:51:17 -- setup/hugepages.sh@94 -- # local anon 00:04:53.791 14:51:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:54.053 14:51:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:54.053 14:51:17 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:54.053 14:51:17 -- setup/common.sh@18 -- # local node= 00:04:54.053 14:51:17 -- setup/common.sh@19 -- # local var val 00:04:54.053 14:51:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.053 14:51:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.053 14:51:17 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.053 14:51:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.053 14:51:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.053 14:51:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7545132 kB' 'MemAvailable: 10475168 kB' 'Buffers: 2684 kB' 'Cached: 3130696 kB' 'SwapCached: 0 kB' 'Active: 497424 kB' 'Inactive: 2753708 kB' 'Active(anon): 128240 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119376 kB' 'Mapped: 50968 kB' 'Shmem: 10488 kB' 'KReclaimable: 88180 kB' 'Slab: 189724 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101544 kB' 'KernelStack: 6760 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 327188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.053 14:51:17 
-- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.053 
14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.053 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.053 14:51:17 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.054 14:51:17 -- setup/common.sh@33 -- # echo 0 00:04:54.054 14:51:17 -- setup/common.sh@33 -- # return 0 00:04:54.054 14:51:17 -- setup/hugepages.sh@97 -- # anon=0 00:04:54.054 14:51:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:54.054 14:51:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.054 14:51:17 -- setup/common.sh@18 -- # local node= 00:04:54.054 14:51:17 -- setup/common.sh@19 -- # local var val 00:04:54.054 14:51:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.054 14:51:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.054 14:51:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.054 14:51:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.054 14:51:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.054 14:51:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7545144 kB' 'MemAvailable: 10475180 kB' 'Buffers: 2684 kB' 'Cached: 3130696 kB' 'SwapCached: 0 kB' 'Active: 497528 kB' 'Inactive: 2753708 kB' 
'Active(anon): 128344 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119196 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88180 kB' 'Slab: 189728 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101548 kB' 'KernelStack: 6736 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 327188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.054 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.054 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # 
continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.055 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.055 14:51:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.055 14:51:17 -- setup/common.sh@33 -- # echo 0 00:04:54.055 14:51:17 -- setup/common.sh@33 -- # return 0 00:04:54.055 14:51:17 -- setup/hugepages.sh@99 -- # surp=0 00:04:54.055 14:51:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:54.055 14:51:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:54.055 14:51:17 -- setup/common.sh@18 -- # local node= 00:04:54.055 14:51:17 -- setup/common.sh@19 -- # local var val 00:04:54.055 14:51:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.055 14:51:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.055 14:51:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.055 14:51:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.055 14:51:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.056 14:51:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7545404 kB' 'MemAvailable: 10475440 kB' 'Buffers: 2684 kB' 'Cached: 3130696 kB' 'SwapCached: 0 kB' 'Active: 497608 kB' 'Inactive: 2753708 kB' 'Active(anon): 128424 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119648 kB' 'Mapped: 51008 kB' 'Shmem: 10488 kB' 'KReclaimable: 88180 kB' 'Slab: 189736 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101556 kB' 'KernelStack: 6784 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 327188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.056 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.056 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.057 14:51:17 -- setup/common.sh@33 -- # echo 0 00:04:54.057 14:51:17 -- setup/common.sh@33 -- # return 0 00:04:54.057 nr_hugepages=512 00:04:54.057 14:51:17 -- setup/hugepages.sh@100 -- # resv=0 00:04:54.057 14:51:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:54.057 resv_hugepages=0 00:04:54.057 14:51:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:54.057 surplus_hugepages=0 00:04:54.057 anon_hugepages=0 00:04:54.057 14:51:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:54.057 14:51:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:54.057 14:51:17 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:54.057 14:51:17 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:54.057 14:51:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:54.057 14:51:17 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:54.057 14:51:17 -- setup/common.sh@18 -- # local node= 00:04:54.057 14:51:17 -- setup/common.sh@19 -- # local var val 00:04:54.057 14:51:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.057 14:51:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.057 14:51:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.057 14:51:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.057 14:51:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.057 14:51:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7545648 kB' 'MemAvailable: 10475684 kB' 'Buffers: 2684 kB' 'Cached: 3130696 kB' 'SwapCached: 0 kB' 'Active: 497272 kB' 'Inactive: 2753708 kB' 'Active(anon): 128088 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119244 kB' 'Mapped: 51008 kB' 'Shmem: 10488 kB' 'KReclaimable: 88180 kB' 'Slab: 189724 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101544 kB' 'KernelStack: 6720 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 327188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 
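For readers following the xtrace above: get_meminfo (setup/common.sh@17 onward) is asked here for HugePages_Total with an empty node argument, so it falls back to /proc/meminfo, strips any "Node N " prefix, and walks the "key: value" pairs until the requested field matches, echoing the value (512 in this run) and returning 0. A rough stand-alone approximation of that behaviour, using the variable names visible in the trace (a sketch, not the SPDK helper itself):

  # Approximation of the get_meminfo behaviour seen in the trace: pick the global
  # or per-node meminfo file, drop any "Node N " prefix, scan for the requested key.
  shopt -s extglob
  get_meminfo() {
      local get=$1 node=$2
      local mem_f=/proc/meminfo
      # Use the per-node view when a node id is supplied and the sysfs file exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while IFS= read -r line; do
          line=${line#Node +([0-9]) }           # per-node files prefix every line with "Node N "
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"                       # numeric value only; the "kB" unit lands in $_
              return 0
          fi
      done < "$mem_f"
      return 1
  }
  # get_meminfo HugePages_Total   -> 512 on the state logged above
  # get_meminfo HugePages_Surp 0  -> surplus pages reported by node0/meminfo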
14:51:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.057 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.057 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 
14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.058 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.058 14:51:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.058 14:51:17 -- setup/common.sh@33 -- # echo 512 00:04:54.058 14:51:17 -- setup/common.sh@33 -- # return 0 00:04:54.058 14:51:17 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:54.059 14:51:17 -- setup/hugepages.sh@112 -- # get_nodes 00:04:54.059 14:51:17 -- setup/hugepages.sh@27 -- # local node 00:04:54.059 14:51:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.059 14:51:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:54.059 14:51:17 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:54.059 14:51:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:54.059 14:51:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.059 14:51:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.059 14:51:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:54.059 14:51:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.059 14:51:17 -- setup/common.sh@18 -- # local node=0 00:04:54.059 14:51:17 -- setup/common.sh@19 -- # local 
var val 00:04:54.059 14:51:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.059 14:51:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.059 14:51:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:54.059 14:51:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:54.059 14:51:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.059 14:51:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7545648 kB' 'MemUsed: 4693472 kB' 'SwapCached: 0 kB' 'Active: 497088 kB' 'Inactive: 2753708 kB' 'Active(anon): 127904 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 3133380 kB' 'Mapped: 51008 kB' 'AnonPages: 119016 kB' 'Shmem: 10488 kB' 'KernelStack: 6752 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88180 kB' 'Slab: 189720 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101540 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- 
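The hugepages.sh@110-@130 entries around this point do the per-node accounting: the global check (( 512 == nr_hugepages + surp + resv )) has already passed, get_nodes records 512 pages for the single /sys/devices/system/node/nodeN entry, and the loop then folds the reserved count and each node's HugePages_Surp into the expected figure before the "node0=512 expecting 512" comparison further down. Roughly, reusing the get_meminfo sketch above (array names follow the trace; the exact bookkeeping in hugepages.sh may differ):

  nodes_sys=( [0]=512 )     # hugepages the kernel reports on each NUMA node
  nodes_test=( [0]=512 )    # count the test expects there before adjustments

  resv=$(get_meminfo HugePages_Rsvd)    # reserved pages from the global meminfo (0 here)

  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))                                   # expected = base + reserved
      (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  #          + per-node surplus
      echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
  done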
setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.059 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.059 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.060 14:51:17 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.060 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.060 14:51:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.060 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.060 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.060 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.060 14:51:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.060 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.060 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.060 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.060 14:51:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.060 14:51:17 -- setup/common.sh@32 -- # continue 00:04:54.060 14:51:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.060 14:51:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.060 14:51:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.060 14:51:17 -- setup/common.sh@33 -- # echo 0 00:04:54.060 14:51:17 -- setup/common.sh@33 -- # return 0 00:04:54.060 node0=512 expecting 512 00:04:54.060 ************************************ 00:04:54.060 END TEST per_node_1G_alloc 00:04:54.060 ************************************ 00:04:54.060 14:51:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.060 14:51:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.060 14:51:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.060 14:51:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.060 14:51:17 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:54.060 14:51:17 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:54.060 00:04:54.060 real 0m0.593s 00:04:54.060 user 0m0.272s 00:04:54.060 sys 0m0.330s 00:04:54.060 14:51:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.060 14:51:17 -- common/autotest_common.sh@10 -- # set +x 00:04:54.060 14:51:17 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:54.060 14:51:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.060 14:51:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.060 14:51:17 -- common/autotest_common.sh@10 -- # set +x 00:04:54.060 ************************************ 00:04:54.060 START TEST even_2G_alloc 00:04:54.060 ************************************ 00:04:54.060 14:51:17 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:04:54.060 14:51:17 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:54.060 14:51:17 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:54.060 14:51:17 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:54.060 14:51:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:54.060 14:51:17 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:54.060 14:51:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:54.060 14:51:17 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:54.060 14:51:17 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:54.060 14:51:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:54.060 14:51:17 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:54.060 14:51:17 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:54.060 14:51:17 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:54.060 14:51:17 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:54.060 14:51:17 -- 
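The even_2G_alloc test that starts here sizes its pool from a kilobyte budget rather than a page count: get_test_nr_hugepages is handed 2097152, and with the 2048 kB Hugepagesize reported in the meminfo dumps that works out to the nr_hugepages=1024 the trace sets; since _no_nodes=1, the whole pool is assigned to node 0 before setup.sh is re-run with NRHUGE=1024 HUGE_EVEN_ALLOC=yes. The arithmetic, spelled out (a plain illustration, not the script itself):

  size_kb=2097152                                  # budget handed to get_test_nr_hugepages
  hugepagesize_kb=2048                             # "Hugepagesize: 2048 kB" from the dumps above
  nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 2097152 / 2048 = 1024
  echo "nr_hugepages=$nr_hugepages"                # matches nr_hugepages=1024 in the trace
  # _no_nodes=1, so nodes_test[0]=1024: the single node receives the whole pool.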
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:54.060 14:51:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.060 14:51:17 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:54.060 14:51:17 -- setup/hugepages.sh@83 -- # : 0 00:04:54.060 14:51:17 -- setup/hugepages.sh@84 -- # : 0 00:04:54.060 14:51:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.060 14:51:17 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:54.060 14:51:17 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:54.060 14:51:17 -- setup/hugepages.sh@153 -- # setup output 00:04:54.060 14:51:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.060 14:51:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:54.631 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:54.631 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:54.631 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:54.631 14:51:17 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:54.631 14:51:17 -- setup/hugepages.sh@89 -- # local node 00:04:54.631 14:51:17 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:54.631 14:51:17 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:54.631 14:51:17 -- setup/hugepages.sh@92 -- # local surp 00:04:54.631 14:51:17 -- setup/hugepages.sh@93 -- # local resv 00:04:54.631 14:51:17 -- setup/hugepages.sh@94 -- # local anon 00:04:54.631 14:51:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:54.631 14:51:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:54.631 14:51:18 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:54.631 14:51:18 -- setup/common.sh@18 -- # local node= 00:04:54.631 14:51:18 -- setup/common.sh@19 -- # local var val 00:04:54.631 14:51:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.631 14:51:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.631 14:51:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.631 14:51:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.631 14:51:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.631 14:51:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.631 14:51:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6500032 kB' 'MemAvailable: 9430068 kB' 'Buffers: 2684 kB' 'Cached: 3130696 kB' 'SwapCached: 0 kB' 'Active: 497952 kB' 'Inactive: 2753708 kB' 'Active(anon): 128768 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119636 kB' 'Mapped: 51124 kB' 'Shmem: 10488 kB' 'KReclaimable: 88180 kB' 'Slab: 189736 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101556 kB' 'KernelStack: 6760 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 327188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.631 
14:51:18 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.631 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.631 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # 
continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.632 14:51:18 -- setup/common.sh@33 -- # echo 0 00:04:54.632 14:51:18 -- setup/common.sh@33 -- # return 0 00:04:54.632 14:51:18 -- setup/hugepages.sh@97 -- # anon=0 00:04:54.632 14:51:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:54.632 14:51:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.632 14:51:18 -- setup/common.sh@18 -- # local node= 00:04:54.632 14:51:18 -- setup/common.sh@19 -- # local var val 00:04:54.632 14:51:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.632 14:51:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.632 14:51:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.632 14:51:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.632 14:51:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.632 14:51:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6500032 kB' 'MemAvailable: 9430068 kB' 'Buffers: 2684 kB' 'Cached: 3130696 kB' 'SwapCached: 0 kB' 'Active: 497572 kB' 'Inactive: 2753708 kB' 'Active(anon): 128388 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119472 kB' 'Mapped: 51008 kB' 'Shmem: 10488 kB' 'KReclaimable: 88180 kB' 'Slab: 189748 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101568 kB' 'KernelStack: 6752 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 327188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 
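verify_nr_hugepages starts by checking the transparent-hugepage mode ("always [madvise] never", the bracketed entry being the active one); because it is not [never], it records the current AnonHugePages figure (anon=0 in this run) before reading HugePages_Surp. A small sketch of that preamble, reusing get_meminfo from above; the THP sysfs path is the usual kernel location and is an assumption, since the log only shows the string being tested:

  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)  # e.g. "always [madvise] never"
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      # THP can hand out anonymous hugepages behind the test's back; note the baseline.
      anon=$(get_meminfo AnonHugePages)    # kB; 0 in the log above
  fi
  surp=$(get_meminfo HugePages_Surp)       # surplus pages, folded into the totals check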
00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.632 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.632 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # 
continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.633 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.633 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.633 14:51:18 -- setup/common.sh@33 -- # echo 0 00:04:54.633 14:51:18 -- setup/common.sh@33 -- # return 0 00:04:54.633 14:51:18 -- setup/hugepages.sh@99 -- # surp=0 00:04:54.633 14:51:18 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:54.633 14:51:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:54.633 14:51:18 -- setup/common.sh@18 -- # local node= 00:04:54.634 14:51:18 -- setup/common.sh@19 -- # local var val 00:04:54.634 14:51:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.634 14:51:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.634 14:51:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.634 14:51:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.634 14:51:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.634 14:51:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6500032 kB' 'MemAvailable: 9430068 kB' 'Buffers: 2684 kB' 'Cached: 3130696 kB' 'SwapCached: 0 kB' 'Active: 497416 kB' 'Inactive: 2753708 kB' 'Active(anon): 128232 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119324 kB' 'Mapped: 51008 kB' 'Shmem: 10488 kB' 'KReclaimable: 88180 kB' 'Slab: 189744 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101564 kB' 'KernelStack: 6768 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 327188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 
00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- 
setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.634 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.634 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 
00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.635 14:51:18 -- setup/common.sh@33 -- # echo 0 00:04:54.635 14:51:18 -- setup/common.sh@33 -- # return 0 00:04:54.635 14:51:18 -- setup/hugepages.sh@100 -- # resv=0 00:04:54.635 nr_hugepages=1024 00:04:54.635 resv_hugepages=0 00:04:54.635 surplus_hugepages=0 00:04:54.635 anon_hugepages=0 00:04:54.635 14:51:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:54.635 14:51:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:54.635 14:51:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:54.635 14:51:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:54.635 14:51:18 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.635 14:51:18 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:54.635 14:51:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:54.635 14:51:18 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:54.635 14:51:18 -- setup/common.sh@18 -- # local node= 00:04:54.635 14:51:18 -- setup/common.sh@19 -- # local var val 00:04:54.635 14:51:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.635 14:51:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.635 14:51:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.635 14:51:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.635 14:51:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.635 14:51:18 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6500032 kB' 'MemAvailable: 9430068 kB' 'Buffers: 2684 kB' 'Cached: 3130696 kB' 'SwapCached: 0 kB' 'Active: 497432 kB' 'Inactive: 2753708 kB' 'Active(anon): 128248 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119332 kB' 'Mapped: 51008 kB' 'Shmem: 10488 kB' 'KReclaimable: 88180 kB' 'Slab: 189744 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101564 kB' 'KernelStack: 6768 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 327188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.635 
14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.635 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.635 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 
00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 
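This HugePages_Total pass feeds the check at hugepages.sh@107/@109: the pool is treated as consistent only if the total reported by the kernel equals nr_hugepages plus the surplus and reserved counts collected just before (1024 == 1024 + 0 + 0 in this run). A sketch of that arithmetic, reusing the get_meminfo_sketch helper assumed earlier; variable names follow the trace:

    # Sketch of the consistency check traced at hugepages.sh@99-@110.
    nr_hugepages=1024
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)  # 1024 in this run
    (( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting consistent'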
00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.636 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.636 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.637 14:51:18 -- setup/common.sh@33 -- # echo 1024 00:04:54.637 14:51:18 -- setup/common.sh@33 -- # return 0 00:04:54.637 14:51:18 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.637 14:51:18 -- setup/hugepages.sh@112 -- # get_nodes 00:04:54.637 14:51:18 -- setup/hugepages.sh@27 -- # local node 00:04:54.637 14:51:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.637 14:51:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:54.637 14:51:18 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:54.637 14:51:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:54.637 14:51:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.637 14:51:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.637 14:51:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:54.637 14:51:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.637 14:51:18 -- setup/common.sh@18 -- # local node=0 00:04:54.637 14:51:18 -- setup/common.sh@19 -- # local var val 00:04:54.637 14:51:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.637 14:51:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.637 14:51:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:54.637 14:51:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:54.637 14:51:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.637 14:51:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6500032 kB' 'MemUsed: 5739088 kB' 'SwapCached: 0 kB' 'Active: 497176 kB' 'Inactive: 2753708 kB' 'Active(anon): 127992 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 3133380 kB' 'Mapped: 51008 kB' 'AnonPages: 119116 kB' 'Shmem: 10488 kB' 'KernelStack: 6752 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88180 kB' 'Slab: 189744 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101564 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 
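For the per-node pass (get_meminfo HugePages_Surp 0), the trace switches mem_f to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " from each line with the "${mem[@]#Node +([0-9]) }" expansion before running the same scan. A sketch of that branch, assuming extglob is enabled as the +([0-9]) pattern requires:

    # Sketch of the node-specific branch traced above: read node0's meminfo,
    # drop the "Node 0 " prefix, then scan for the requested key.
    shopt -s extglob
    node=0
    mem_f=/sys/devices/system/node/node${node}/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # same prefix strip as in the trace
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == HugePages_Surp ]] || continue
        echo "$val"   # 0 for node0 in this run
        break
    done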
00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- 
setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.637 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.637 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.638 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.638 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.638 14:51:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.638 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.638 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.638 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.638 14:51:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.638 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.638 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.638 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.638 14:51:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.638 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.638 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.638 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.638 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.638 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.638 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.638 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.638 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.638 14:51:18 -- setup/common.sh@32 -- # continue 00:04:54.638 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.638 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.638 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.638 14:51:18 -- setup/common.sh@33 -- # echo 0 00:04:54.638 14:51:18 -- setup/common.sh@33 -- # return 0 00:04:54.638 14:51:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.638 14:51:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.638 14:51:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.638 14:51:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.638 
node0=1024 expecting 1024 00:04:54.638 ************************************ 00:04:54.638 END TEST even_2G_alloc 00:04:54.638 ************************************ 00:04:54.638 14:51:18 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:54.638 14:51:18 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:54.638 00:04:54.638 real 0m0.618s 00:04:54.638 user 0m0.299s 00:04:54.638 sys 0m0.319s 00:04:54.638 14:51:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.638 14:51:18 -- common/autotest_common.sh@10 -- # set +x 00:04:54.897 14:51:18 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:54.897 14:51:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.897 14:51:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.897 14:51:18 -- common/autotest_common.sh@10 -- # set +x 00:04:54.897 ************************************ 00:04:54.897 START TEST odd_alloc 00:04:54.897 ************************************ 00:04:54.897 14:51:18 -- common/autotest_common.sh@1114 -- # odd_alloc 00:04:54.897 14:51:18 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:54.897 14:51:18 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:54.897 14:51:18 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:54.897 14:51:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:54.897 14:51:18 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:54.897 14:51:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:54.897 14:51:18 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:54.897 14:51:18 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:54.897 14:51:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:54.897 14:51:18 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:54.897 14:51:18 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:54.897 14:51:18 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:54.897 14:51:18 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:54.897 14:51:18 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:54.897 14:51:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.897 14:51:18 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:54.897 14:51:18 -- setup/hugepages.sh@83 -- # : 0 00:04:54.897 14:51:18 -- setup/hugepages.sh@84 -- # : 0 00:04:54.897 14:51:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.897 14:51:18 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:54.897 14:51:18 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:54.897 14:51:18 -- setup/hugepages.sh@160 -- # setup output 00:04:54.897 14:51:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.897 14:51:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:55.157 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:55.157 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:55.157 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:55.157 14:51:18 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:55.157 14:51:18 -- setup/hugepages.sh@89 -- # local node 00:04:55.157 14:51:18 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:55.157 14:51:18 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:55.157 14:51:18 -- setup/hugepages.sh@92 -- # local surp 00:04:55.157 14:51:18 -- setup/hugepages.sh@93 -- # local resv 00:04:55.157 14:51:18 -- setup/hugepages.sh@94 -- # local anon 00:04:55.157 14:51:18 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:55.157 14:51:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:55.157 14:51:18 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:55.157 14:51:18 -- setup/common.sh@18 -- # local node= 00:04:55.157 14:51:18 -- setup/common.sh@19 -- # local var val 00:04:55.157 14:51:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.157 14:51:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.157 14:51:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.157 14:51:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.157 14:51:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.157 14:51:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.157 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6500860 kB' 'MemAvailable: 9430896 kB' 'Buffers: 2684 kB' 'Cached: 3130696 kB' 'SwapCached: 0 kB' 'Active: 497804 kB' 'Inactive: 2753708 kB' 'Active(anon): 128620 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119756 kB' 'Mapped: 51344 kB' 'Shmem: 10488 kB' 'KReclaimable: 88180 kB' 'Slab: 189744 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101564 kB' 'KernelStack: 6792 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 327188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 
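(hugepages.sh@96-97 above first checks /sys/kernel/mm/transparent_hugepage/enabled — here "always [madvise] never", i.e. THP is not fully disabled — before asking get_meminfo for AnonHugePages; with THP set to [never] the anon count is simply left at 0. A hedged sketch of that logic, reusing the get_meminfo sketch above and with names simplified rather than copied from SPDK:)

    # count anonymous THP usage only when transparent hugepages are not disabled
    get_anon_hugepages() {
        local thp_state anon=0
        thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
        if [[ $thp_state != *"[never]"* ]]; then
            anon=$(get_meminfo AnonHugePages)   # kB of anonymous memory backed by THP
        fi
        echo "$anon"
    }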
00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # 
continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.158 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.158 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.159 14:51:18 -- setup/common.sh@33 -- # echo 0 00:04:55.159 14:51:18 -- setup/common.sh@33 -- # return 0 00:04:55.159 14:51:18 -- setup/hugepages.sh@97 -- # anon=0 00:04:55.159 14:51:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:55.159 14:51:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.159 14:51:18 -- setup/common.sh@18 -- # local node= 00:04:55.159 14:51:18 -- setup/common.sh@19 -- # local var val 00:04:55.159 14:51:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.159 14:51:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.159 14:51:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.159 14:51:18 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.159 14:51:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.159 14:51:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.159 14:51:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6501112 kB' 'MemAvailable: 9431148 kB' 'Buffers: 2684 kB' 'Cached: 3130696 kB' 'SwapCached: 0 kB' 'Active: 497320 kB' 'Inactive: 2753708 kB' 'Active(anon): 128136 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119220 kB' 'Mapped: 51008 kB' 'Shmem: 10488 kB' 'KReclaimable: 88180 kB' 'Slab: 189736 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101556 kB' 'KernelStack: 6752 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 327188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.159 14:51:18 -- setup/common.sh@31 
-- # read -r var val _ 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.159 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.159 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # continue 
00:04:55.421 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.421 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.421 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 
00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.422 14:51:18 -- setup/common.sh@33 -- # echo 0 00:04:55.422 14:51:18 -- setup/common.sh@33 -- # return 0 00:04:55.422 14:51:18 -- setup/hugepages.sh@99 -- # surp=0 00:04:55.422 14:51:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:55.422 14:51:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:55.422 14:51:18 -- setup/common.sh@18 -- # local node= 00:04:55.422 14:51:18 -- setup/common.sh@19 -- # local var val 00:04:55.422 14:51:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.422 14:51:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.422 14:51:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.422 14:51:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.422 14:51:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.422 14:51:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6501112 kB' 'MemAvailable: 9431148 kB' 'Buffers: 2684 kB' 'Cached: 3130696 kB' 'SwapCached: 0 kB' 'Active: 497116 kB' 'Inactive: 2753708 kB' 'Active(anon): 127932 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119328 kB' 'Mapped: 51008 kB' 'Shmem: 10488 kB' 'KReclaimable: 88180 kB' 'Slab: 189736 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101556 kB' 'KernelStack: 6768 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 
kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 327188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.422 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.422 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 
14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 
-- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.423 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.423 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.424 14:51:18 -- setup/common.sh@33 -- # echo 0 00:04:55.424 14:51:18 -- setup/common.sh@33 -- # return 0 00:04:55.424 nr_hugepages=1025 00:04:55.424 resv_hugepages=0 00:04:55.424 14:51:18 -- setup/hugepages.sh@100 -- # resv=0 00:04:55.424 14:51:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:55.424 14:51:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:55.424 surplus_hugepages=0 00:04:55.424 14:51:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:55.424 14:51:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:55.424 anon_hugepages=0 00:04:55.424 14:51:18 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:55.424 14:51:18 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:55.424 14:51:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:55.424 14:51:18 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:55.424 14:51:18 -- setup/common.sh@18 -- # local node= 00:04:55.424 14:51:18 -- setup/common.sh@19 -- # local var val 00:04:55.424 14:51:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.424 14:51:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.424 14:51:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.424 14:51:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.424 14:51:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.424 14:51:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6501112 kB' 'MemAvailable: 9431148 kB' 'Buffers: 2684 kB' 'Cached: 3130696 kB' 'SwapCached: 0 kB' 'Active: 497112 kB' 'Inactive: 2753708 kB' 'Active(anon): 127928 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119052 kB' 'Mapped: 51008 kB' 'Shmem: 10488 kB' 'KReclaimable: 88180 kB' 'Slab: 189724 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101544 kB' 'KernelStack: 6800 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 326820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:55.424 
14:51:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 
14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.424 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.424 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.425 14:51:18 -- setup/common.sh@33 -- # echo 1025 00:04:55.425 14:51:18 -- setup/common.sh@33 -- # return 0 00:04:55.425 14:51:18 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:55.425 14:51:18 -- setup/hugepages.sh@112 -- # get_nodes 00:04:55.425 14:51:18 -- setup/hugepages.sh@27 -- # local node 00:04:55.425 14:51:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.425 14:51:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
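The long run of pattern checks above is setup/common.sh's get_meminfo walking the meminfo fields one "name: value" line at a time until HugePages_Total turns up; it returns 1025, and the hugepages.sh check (( 1025 == nr_hugepages + surp + resv )) then holds because odd_alloc deliberately requested an odd page count. A stand-alone sketch of that lookup, condensed from the trace rather than copied from setup/common.sh (the real helper also caches the file with mapfile and handles per-node copies):

#!/usr/bin/env bash
# Sketch of the field scan shown in the trace: walk /proc/meminfo one
# "name: value" line at a time until the requested field turns up.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # the long run of "continue" lines above
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

total=$(get_meminfo HugePages_Total)        # 1025 on this VM at this point
echo "HugePages_Total=$total"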
00:04:55.425 14:51:18 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:55.425 14:51:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:55.425 14:51:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.425 14:51:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.425 14:51:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:55.425 14:51:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.425 14:51:18 -- setup/common.sh@18 -- # local node=0 00:04:55.425 14:51:18 -- setup/common.sh@19 -- # local var val 00:04:55.425 14:51:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.425 14:51:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.425 14:51:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:55.425 14:51:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:55.425 14:51:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.425 14:51:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6501112 kB' 'MemUsed: 5738008 kB' 'SwapCached: 0 kB' 'Active: 497184 kB' 'Inactive: 2753708 kB' 'Active(anon): 128000 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3133380 kB' 'Mapped: 51008 kB' 'AnonPages: 119124 kB' 'Shmem: 10488 kB' 'KernelStack: 6736 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88180 kB' 'Slab: 189724 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.425 
14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.425 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.425 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 
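The scan running through this stretch is the per-node variant, get_meminfo HugePages_Surp 0: because /sys/devices/system/node/node0/meminfo exists, mem_f is switched to that file, and the mapfile / "${mem[@]#Node +([0-9]) }" steps strip the "Node 0 " prefix so the usual name/value split still applies. A stand-alone sketch of that path selection and prefix strip (names taken from the trace; the fallback to /proc/meminfo mirrors the existence check shown there):

#!/usr/bin/env bash
shopt -s extglob    # the "Node +([0-9]) " prefix pattern below needs extglob

# Per-node variant of the lookup: pick the node's meminfo file and drop the
# "Node <n> " prefix so the same field scan applies.
get_node_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_node_meminfo HugePages_Surp 0    # 0 in the run above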
14:51:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # continue 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.426 14:51:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.426 14:51:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.426 14:51:18 -- setup/common.sh@33 -- # echo 0 00:04:55.426 14:51:18 -- setup/common.sh@33 -- # return 0 00:04:55.426 14:51:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.426 14:51:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.426 14:51:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.426 node0=1025 expecting 1025 00:04:55.426 14:51:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.426 14:51:18 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:55.426 14:51:18 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:55.426 00:04:55.426 real 0m0.621s 00:04:55.426 user 0m0.306s 00:04:55.426 sys 0m0.317s 00:04:55.426 14:51:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:55.426 ************************************ 00:04:55.426 END TEST odd_alloc 00:04:55.426 ************************************ 00:04:55.426 14:51:18 -- common/autotest_common.sh@10 -- # set +x 00:04:55.426 14:51:18 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:55.426 14:51:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.426 14:51:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.426 14:51:18 -- common/autotest_common.sh@10 -- # set +x 00:04:55.426 ************************************ 00:04:55.426 START TEST custom_alloc 00:04:55.426 ************************************ 00:04:55.426 14:51:18 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:55.426 14:51:18 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:55.426 14:51:18 -- setup/hugepages.sh@169 -- # local node 00:04:55.426 14:51:18 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:55.426 14:51:18 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:55.426 14:51:18 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:55.426 14:51:18 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:04:55.426 14:51:18 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:55.426 14:51:18 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:55.426 14:51:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:55.426 14:51:18 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:55.427 14:51:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:55.427 14:51:18 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:55.427 14:51:18 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.427 14:51:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:55.427 14:51:18 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:55.427 14:51:18 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.427 14:51:18 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.427 14:51:18 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:55.427 14:51:18 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:55.427 14:51:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.427 14:51:18 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:55.427 14:51:18 -- setup/hugepages.sh@83 -- # : 0 00:04:55.427 14:51:18 -- setup/hugepages.sh@84 -- # : 0 00:04:55.427 14:51:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.427 14:51:18 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:55.427 14:51:18 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:55.427 14:51:18 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:55.427 14:51:18 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:55.427 14:51:18 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:55.427 14:51:18 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:55.427 14:51:18 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:55.427 14:51:18 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.427 14:51:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:55.427 14:51:18 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:55.427 14:51:18 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.427 14:51:18 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.427 14:51:18 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:55.427 14:51:18 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:55.427 14:51:18 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:55.427 14:51:18 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:55.427 14:51:18 -- setup/hugepages.sh@78 -- # return 0 00:04:55.427 14:51:18 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:55.427 14:51:18 -- setup/hugepages.sh@187 -- # setup output 00:04:55.427 14:51:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.427 14:51:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:56.000 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:56.000 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:56.000 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:56.000 14:51:19 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:56.000 14:51:19 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:56.000 14:51:19 -- setup/hugepages.sh@89 -- # local node 00:04:56.000 14:51:19 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:56.000 14:51:19 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:56.000 14:51:19 -- setup/hugepages.sh@92 -- # local surp 
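With odd_alloc passing (node0=1025 expecting 1025), custom_alloc starts by asking get_test_nr_hugepages for 1048576 kB of hugepage memory; against the 2048 kB Hugepagesize reported in the meminfo dumps, that works out to 512 pages, all assigned to the single node via HUGENODE='nodes_hp[0]=512' before scripts/setup.sh is re-run. The sizing, as a worked snippet (values from this log; the echo lines are illustrative):

#!/usr/bin/env bash
# Worked sizing for this custom_alloc run: a 1048576 kB (1 GiB) request over
# the default 2048 kB hugepages comes out to 512 pages on node 0.
size_kb=1048576
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB here
nr_hugepages=$(( size_kb / hugepagesize_kb ))

echo "nr_hugepages=$nr_hugepages"              # 512
echo "HUGENODE=nodes_hp[0]=$nr_hugepages"      # matches the HUGENODE= line in the log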
00:04:56.000 14:51:19 -- setup/hugepages.sh@93 -- # local resv 00:04:56.000 14:51:19 -- setup/hugepages.sh@94 -- # local anon 00:04:56.000 14:51:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:56.000 14:51:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:56.000 14:51:19 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:56.000 14:51:19 -- setup/common.sh@18 -- # local node= 00:04:56.000 14:51:19 -- setup/common.sh@19 -- # local var val 00:04:56.000 14:51:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.000 14:51:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.000 14:51:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.000 14:51:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.000 14:51:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.000 14:51:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7552108 kB' 'MemAvailable: 10482144 kB' 'Buffers: 2684 kB' 'Cached: 3130696 kB' 'SwapCached: 0 kB' 'Active: 497744 kB' 'Inactive: 2753708 kB' 'Active(anon): 128560 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119696 kB' 'Mapped: 51184 kB' 'Shmem: 10488 kB' 'KReclaimable: 88180 kB' 'Slab: 189744 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101564 kB' 'KernelStack: 6792 kB' 'PageTables: 4560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 327188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 
00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.000 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.000 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.001 14:51:19 -- setup/common.sh@33 -- # echo 0 00:04:56.001 14:51:19 -- setup/common.sh@33 -- # return 0 00:04:56.001 14:51:19 -- setup/hugepages.sh@97 -- # anon=0 00:04:56.001 14:51:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:56.001 14:51:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.001 14:51:19 -- setup/common.sh@18 -- # local node= 00:04:56.001 14:51:19 -- setup/common.sh@19 -- # local var val 00:04:56.001 14:51:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.001 14:51:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
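The verify_nr_hugepages pass running here re-reads the counters through the same scan: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] check above tests the transparent-hugepage state string (the bracketed value is the selected mode, which looks like the contents of /sys/kernel/mm/transparent_hugepage/enabled) before AnonHugePages is sampled, and HugePages_Surp / HugePages_Rsvd feed the surp/resv terms of the count check. A bookkeeping sketch with the values from this run (the summary and pass/fail echo lines are illustrative, not the script's own output):

#!/usr/bin/env bash
# Bookkeeping sketch for verify_nr_hugepages: sample the counters read above
# and check them against the 512 pages custom_alloc just configured.
get_meminfo() { awk -v f="$1:" '$1 == f {print $2}' /proc/meminfo; }   # condensed field scan

nr_hugepages=512                                        # from the HUGENODE= line above
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)

anon=0
if [[ $thp != *'[never]'* ]]; then
    anon=$(get_meminfo AnonHugePages)                   # only sampled when THP is not disabled; 0 kB here
fi
surp=$(get_meminfo HugePages_Surp)                      # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)                      # 0 in this run
total=$(get_meminfo HugePages_Total)

echo "anon=${anon}kB surp=$surp resv=$resv total=$total"
# Same shape as the check traced for odd_alloc at hugepages.sh@110.
if (( total == nr_hugepages + surp + resv )); then
    echo "node0=$total expecting $nr_hugepages"
else
    echo "unexpected HugePages_Total: $total" >&2
fi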
00:04:56.001 14:51:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.001 14:51:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.001 14:51:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.001 14:51:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7552108 kB' 'MemAvailable: 10482144 kB' 'Buffers: 2684 kB' 'Cached: 3130696 kB' 'SwapCached: 0 kB' 'Active: 497832 kB' 'Inactive: 2753708 kB' 'Active(anon): 128648 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119756 kB' 'Mapped: 51132 kB' 'Shmem: 10488 kB' 'KReclaimable: 88180 kB' 'Slab: 189724 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101544 kB' 'KernelStack: 6744 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 327188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- 
setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 
00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.001 14:51:19 -- setup/common.sh@33 -- # echo 0 00:04:56.001 14:51:19 -- setup/common.sh@33 -- # return 0 00:04:56.001 14:51:19 -- setup/hugepages.sh@99 -- # surp=0 00:04:56.001 14:51:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:56.001 14:51:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:56.001 14:51:19 -- setup/common.sh@18 -- # local node= 00:04:56.001 14:51:19 -- setup/common.sh@19 -- # local var val 00:04:56.001 14:51:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.001 14:51:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.001 14:51:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.001 14:51:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.001 14:51:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.001 14:51:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7552108 kB' 'MemAvailable: 10482144 kB' 'Buffers: 2684 kB' 'Cached: 3130696 kB' 'SwapCached: 0 kB' 'Active: 497432 kB' 'Inactive: 2753708 kB' 'Active(anon): 128248 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119348 kB' 'Mapped: 
51008 kB' 'Shmem: 10488 kB' 'KReclaimable: 88180 kB' 'Slab: 189764 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101584 kB' 'KernelStack: 6768 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 327188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.001 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.001 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 
00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 
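(Editor's note: further down, once nr_hugepages=512 is confirmed, hugepages.sh checks each NUMA node's allocation against the requested count and prints "node0=512 expecting 512". The sketch below is a simplified, editorial reconstruction of that per-node check; the array names nodes_test/nodes_sys and the resv/surplus handling come from the trace, while the flat control flow and the exit on mismatch are assumptions.)

    nodes_test=([0]=512)   # pages the test requested on node 0 in this custom_alloc run
    nodes_sys=([0]=512)    # pages sysfs reports for node 0
    resv=0                 # reserved pages; the trace shows resv=0 here

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(get_meminfo HugePages_Surp "$node")   # helper sketched above; 0 in this run
        (( nodes_test[node] += surp ))
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1
    done

    # Cross-check against the meminfo dump above: 512 pages x 2048 kB per page
    # = 1048576 kB, matching the 'Hugetlb: 1048576 kB' line.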
00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.002 14:51:19 -- setup/common.sh@33 -- # echo 0 00:04:56.002 14:51:19 -- setup/common.sh@33 -- # return 0 00:04:56.002 nr_hugepages=512 00:04:56.002 14:51:19 -- setup/hugepages.sh@100 -- # resv=0 00:04:56.002 14:51:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:56.002 resv_hugepages=0 00:04:56.002 surplus_hugepages=0 00:04:56.002 anon_hugepages=0 00:04:56.002 14:51:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:56.002 14:51:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:56.002 14:51:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:56.002 14:51:19 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:56.002 14:51:19 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:56.002 14:51:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:56.002 14:51:19 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:56.002 14:51:19 -- setup/common.sh@18 -- # local node= 00:04:56.002 14:51:19 -- setup/common.sh@19 -- # local var val 00:04:56.002 14:51:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.002 14:51:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.002 14:51:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.002 14:51:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.002 14:51:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.002 14:51:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7552108 kB' 'MemAvailable: 10482144 kB' 'Buffers: 2684 kB' 'Cached: 3130696 kB' 'SwapCached: 0 kB' 'Active: 497760 kB' 'Inactive: 2753708 kB' 'Active(anon): 128576 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119668 kB' 'Mapped: 51268 kB' 'Shmem: 10488 kB' 'KReclaimable: 88180 kB' 'Slab: 189764 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101584 kB' 'KernelStack: 6800 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 326988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 
'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.002 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.002 14:51:19 -- setup/common.sh@32 
-- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.002 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 
14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.003 14:51:19 -- setup/common.sh@33 -- # echo 512 00:04:56.003 14:51:19 -- setup/common.sh@33 -- # return 0 00:04:56.003 14:51:19 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:56.003 14:51:19 -- setup/hugepages.sh@112 -- # get_nodes 00:04:56.003 14:51:19 -- setup/hugepages.sh@27 -- # local node 00:04:56.003 14:51:19 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:04:56.003 14:51:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:56.003 14:51:19 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:56.003 14:51:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:56.003 14:51:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.003 14:51:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.003 14:51:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:56.003 14:51:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.003 14:51:19 -- setup/common.sh@18 -- # local node=0 00:04:56.003 14:51:19 -- setup/common.sh@19 -- # local var val 00:04:56.003 14:51:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.003 14:51:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.003 14:51:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:56.003 14:51:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:56.003 14:51:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.003 14:51:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7552360 kB' 'MemUsed: 4686760 kB' 'SwapCached: 0 kB' 'Active: 497440 kB' 'Inactive: 2753708 kB' 'Active(anon): 128256 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3133380 kB' 'Mapped: 51008 kB' 'AnonPages: 119376 kB' 'Shmem: 10488 kB' 'KernelStack: 6752 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88180 kB' 'Slab: 189740 kB' 'SReclaimable: 88180 kB' 'SUnreclaim: 101560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 
14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.003 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.003 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.004 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.004 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.004 14:51:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.004 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.004 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.004 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.004 14:51:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.004 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.004 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.004 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.004 14:51:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.004 14:51:19 -- setup/common.sh@32 -- # continue 00:04:56.004 14:51:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.004 14:51:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.004 14:51:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.004 14:51:19 -- setup/common.sh@33 -- # echo 0 00:04:56.004 14:51:19 -- setup/common.sh@33 -- # return 0 00:04:56.004 node0=512 expecting 512 00:04:56.004 ************************************ 00:04:56.004 END TEST custom_alloc 00:04:56.004 ************************************ 00:04:56.004 14:51:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.004 14:51:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.004 14:51:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.004 14:51:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.004 14:51:19 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:56.004 14:51:19 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:56.004 00:04:56.004 real 0m0.596s 00:04:56.004 user 0m0.300s 00:04:56.004 sys 0m0.315s 00:04:56.004 14:51:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:56.004 14:51:19 -- common/autotest_common.sh@10 -- # set +x 00:04:56.004 14:51:19 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:56.004 14:51:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.004 14:51:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.004 14:51:19 -- common/autotest_common.sh@10 -- # set +x 00:04:56.263 ************************************ 00:04:56.263 START TEST no_shrink_alloc 00:04:56.263 ************************************ 00:04:56.263 14:51:19 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:04:56.263 14:51:19 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:56.263 14:51:19 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:56.263 14:51:19 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:56.263 14:51:19 -- 
setup/hugepages.sh@51 -- # shift 00:04:56.263 14:51:19 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:56.263 14:51:19 -- setup/hugepages.sh@52 -- # local node_ids 00:04:56.263 14:51:19 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:56.263 14:51:19 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:56.263 14:51:19 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:56.263 14:51:19 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:56.263 14:51:19 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.263 14:51:19 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:56.263 14:51:19 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:56.263 14:51:19 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.263 14:51:19 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:56.263 14:51:19 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:56.263 14:51:19 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:56.263 14:51:19 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:56.263 14:51:19 -- setup/hugepages.sh@73 -- # return 0 00:04:56.263 14:51:19 -- setup/hugepages.sh@198 -- # setup output 00:04:56.263 14:51:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.263 14:51:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:56.525 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:56.525 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:56.525 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:56.525 14:51:19 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:56.525 14:51:19 -- setup/hugepages.sh@89 -- # local node 00:04:56.525 14:51:20 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:56.525 14:51:20 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:56.525 14:51:20 -- setup/hugepages.sh@92 -- # local surp 00:04:56.525 14:51:20 -- setup/hugepages.sh@93 -- # local resv 00:04:56.525 14:51:20 -- setup/hugepages.sh@94 -- # local anon 00:04:56.525 14:51:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:56.525 14:51:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:56.525 14:51:20 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:56.525 14:51:20 -- setup/common.sh@18 -- # local node= 00:04:56.525 14:51:20 -- setup/common.sh@19 -- # local var val 00:04:56.525 14:51:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.525 14:51:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.525 14:51:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.525 14:51:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.525 14:51:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.525 14:51:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.525 14:51:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6511780 kB' 'MemAvailable: 9441816 kB' 'Buffers: 2684 kB' 'Cached: 3130700 kB' 'SwapCached: 0 kB' 'Active: 495052 kB' 'Inactive: 2753712 kB' 'Active(anon): 125868 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753712 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116916 kB' 'Mapped: 50276 kB' 'Shmem: 10488 kB' 'KReclaimable: 88176 kB' 'Slab: 
189544 kB' 'SReclaimable: 88176 kB' 'SUnreclaim: 101368 kB' 'KernelStack: 6664 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 312436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.525 14:51:20 -- setup/common.sh@31 
-- # read -r var val _ 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.525 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.525 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- 
setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.526 14:51:20 -- setup/common.sh@33 -- # echo 0 00:04:56.526 14:51:20 -- setup/common.sh@33 -- # return 0 00:04:56.526 14:51:20 -- setup/hugepages.sh@97 -- # anon=0 00:04:56.526 14:51:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:56.526 14:51:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.526 14:51:20 -- setup/common.sh@18 -- # local node= 00:04:56.526 14:51:20 -- setup/common.sh@19 -- # local var val 00:04:56.526 14:51:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.526 14:51:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.526 14:51:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.526 14:51:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.526 14:51:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.526 14:51:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6511780 kB' 'MemAvailable: 9441816 kB' 'Buffers: 2684 kB' 'Cached: 3130700 kB' 'SwapCached: 0 kB' 'Active: 494704 kB' 'Inactive: 2753712 kB' 'Active(anon): 125520 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753712 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116636 kB' 'Mapped: 50160 kB' 'Shmem: 10488 kB' 'KReclaimable: 88176 kB' 'Slab: 189524 kB' 'SReclaimable: 88176 kB' 'SUnreclaim: 101348 kB' 'KernelStack: 6672 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 312436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.526 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.526 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 
-- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- 
setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.527 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.527 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 
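The trace above and below is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time until it reaches the requested key (here HugePages_Surp); every "continue" line is one non-matching key being skipped, which is why a single call produces this much output. A condensed sketch of that scan, written from what the trace shows rather than from the SPDK source (the function and variable names here are illustrative), looks roughly like this:

# Hedged sketch, assuming only the behavior visible in the trace (not the SPDK source):
# scan a meminfo file, strip any "Node <n>" prefix, and echo the value of the requested key.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument, prefer that node's own meminfo file when present.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local line var val _
    while read -r line; do
        line=${line#Node [0-9] }   # per-node lines carry a "Node 0 " prefix (single-digit node assumed)
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0
}
# Example: get_meminfo_sketch HugePages_Surp      -> 0 on this runner
#          get_meminfo_sketch HugePages_Total 0   -> 1024 on node 0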
00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.528 14:51:20 -- setup/common.sh@33 -- # echo 0 00:04:56.528 14:51:20 -- setup/common.sh@33 -- # return 0 00:04:56.528 14:51:20 -- setup/hugepages.sh@99 -- # surp=0 00:04:56.528 14:51:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:56.528 14:51:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:56.528 14:51:20 -- setup/common.sh@18 -- # local node= 00:04:56.528 14:51:20 -- setup/common.sh@19 -- # local var val 00:04:56.528 14:51:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.528 14:51:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.528 14:51:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.528 14:51:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.528 14:51:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.528 14:51:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.528 14:51:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6512156 kB' 'MemAvailable: 9442192 kB' 'Buffers: 2684 kB' 'Cached: 3130700 kB' 'SwapCached: 0 kB' 'Active: 494916 kB' 'Inactive: 2753712 kB' 'Active(anon): 125732 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753712 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116848 kB' 'Mapped: 50160 kB' 'Shmem: 10488 kB' 'KReclaimable: 88176 kB' 'Slab: 189524 kB' 'SReclaimable: 88176 kB' 'SUnreclaim: 101348 kB' 'KernelStack: 6656 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 312436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.528 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.528 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
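Once the HugePages_Rsvd scan below finishes, the script has the anon, surp and resv values in hand and checks them against the requested page count (the nr_hugepages=1024 echoed further down). A standalone sketch of that accounting step, using awk instead of the script's own helpers and therefore only an approximation of what hugepages.sh does:

# Hedged sketch of the accounting check, not the SPDK source: the allocated
# total must equal the requested count plus surplus plus reserved pages.
verify_hugepage_accounting() {
    local want=$1
    local total surp resv
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    echo "nr_hugepages=$want resv_hugepages=$resv surplus_hugepages=$surp"
    (( total == want + surp + resv ))   # non-zero exit status on a mismatch
}
# Example: verify_hugepage_accounting 1024 || echo 'hugepage accounting mismatch'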
00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.529 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.529 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.530 14:51:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.530 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.530 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.530 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.530 14:51:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.530 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.530 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.530 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.530 14:51:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.530 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.530 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.530 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.530 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.530 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.530 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.530 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.530 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.530 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.530 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.530 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.530 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.530 14:51:20 -- setup/common.sh@33 -- # echo 0 00:04:56.530 14:51:20 -- setup/common.sh@33 -- # return 0 00:04:56.530 14:51:20 -- setup/hugepages.sh@100 -- # resv=0 00:04:56.530 nr_hugepages=1024 00:04:56.530 14:51:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:56.530 resv_hugepages=0 00:04:56.530 14:51:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:56.530 surplus_hugepages=0 00:04:56.530 14:51:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:56.530 anon_hugepages=0 00:04:56.530 14:51:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:56.530 14:51:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:56.530 14:51:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:56.530 14:51:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:56.530 14:51:20 -- 
setup/common.sh@17 -- # local get=HugePages_Total 00:04:56.530 14:51:20 -- setup/common.sh@18 -- # local node= 00:04:56.530 14:51:20 -- setup/common.sh@19 -- # local var val 00:04:56.530 14:51:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.530 14:51:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.530 14:51:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.530 14:51:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.530 14:51:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.530 14:51:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.530 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.530 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.530 14:51:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6512156 kB' 'MemAvailable: 9442192 kB' 'Buffers: 2684 kB' 'Cached: 3130700 kB' 'SwapCached: 0 kB' 'Active: 494732 kB' 'Inactive: 2753712 kB' 'Active(anon): 125548 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753712 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116640 kB' 'Mapped: 50160 kB' 'Shmem: 10488 kB' 'KReclaimable: 88176 kB' 'Slab: 189524 kB' 'SReclaimable: 88176 kB' 'SUnreclaim: 101348 kB' 'KernelStack: 6672 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 312436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:56.530 14:51:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.530 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.530 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.530 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.530 14:51:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.530 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.530 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.530 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.530 14:51:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.530 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.530 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.530 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.530 14:51:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.530 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.530 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.530 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.530 14:51:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.530 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.530 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.530 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.530 14:51:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 
-- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- 
# continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.791 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.791 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 
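After the system-wide totals check out, the trace below switches to per-node accounting: get_nodes enumerates /sys/devices/system/node/node*, and get_meminfo is re-run against node0's own meminfo file. A self-contained way to see the same per-node numbers (illustrative, not the script's helpers):

# Hedged sketch: report each NUMA node's hugepage allocation straight from sysfs.
# Per-node meminfo lines look like "Node 0 HugePages_Total:  1024", hence $4.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    total=$(awk '/HugePages_Total:/ {print $4}' "$node_dir/meminfo")
    free=$(awk '/HugePages_Free:/ {print $4}' "$node_dir/meminfo")
    echo "node${node}: HugePages_Total=${total} HugePages_Free=${free}"
done
# This runner has a single node, so the expected output is
# node0: HugePages_Total=1024 HugePages_Free=1024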
00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.792 14:51:20 -- setup/common.sh@33 -- # echo 1024 00:04:56.792 14:51:20 -- setup/common.sh@33 -- # return 0 00:04:56.792 14:51:20 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:56.792 14:51:20 -- setup/hugepages.sh@112 -- # get_nodes 00:04:56.792 14:51:20 -- setup/hugepages.sh@27 -- # local node 00:04:56.792 14:51:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.792 14:51:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:56.792 14:51:20 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:56.792 14:51:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:56.792 14:51:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.792 14:51:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.792 14:51:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:56.792 14:51:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.792 14:51:20 -- setup/common.sh@18 -- # local node=0 00:04:56.792 14:51:20 -- setup/common.sh@19 -- # local var val 00:04:56.792 14:51:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.792 14:51:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.792 14:51:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:56.792 14:51:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:56.792 14:51:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.792 14:51:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6512792 kB' 'MemUsed: 5726328 kB' 'SwapCached: 0 kB' 'Active: 494928 kB' 'Inactive: 2753712 kB' 'Active(anon): 125744 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753712 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3133384 kB' 'Mapped: 50160 kB' 'AnonPages: 116828 kB' 'Shmem: 10488 kB' 'KernelStack: 6672 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88176 kB' 'Slab: 189520 kB' 'SReclaimable: 88176 kB' 'SUnreclaim: 101344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.792 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.792 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.793 14:51:20 -- 
setup/common.sh@32 -- # continue 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.793 14:51:20 -- setup/common.sh@32 -- # continue 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.793 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 
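The last step visible in the trace below re-runs the SPDK setup script with CLEAR_HUGE=no and NRHUGE=512: the existing 1024 pages are kept rather than freed, so the script only reports the allocation (the "Requested 512 hugepages but 1024 already allocated on node0" INFO line) and verify_nr_hugepages then repeats against the unchanged counts. The invocation amounts to the following (path taken from the trace; running it requires the same environment and privileges as the CI job):

# Hedged sketch of the re-invocation seen below; CLEAR_HUGE=no keeps the pages
# that are already allocated, NRHUGE=512 is the smaller new request.
CLEAR_HUGE=no NRHUGE=512 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
# Confirm the allocation afterwards:
grep -E 'HugePages_(Total|Free)' /proc/meminfo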
00:04:56.793 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.793 14:51:20 -- setup/common.sh@33 -- # echo 0 00:04:56.793 14:51:20 -- setup/common.sh@33 -- # return 0 00:04:56.793 14:51:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.793 14:51:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.793 14:51:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.793 14:51:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.793 node0=1024 expecting 1024 00:04:56.793 14:51:20 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:56.793 14:51:20 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:56.793 14:51:20 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:56.793 14:51:20 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:56.793 14:51:20 -- setup/hugepages.sh@202 -- # setup output 00:04:56.793 14:51:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.793 14:51:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:57.054 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:57.054 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:57.054 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:57.054 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:57.054 14:51:20 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:57.054 14:51:20 -- setup/hugepages.sh@89 -- # local node 00:04:57.054 14:51:20 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:57.054 14:51:20 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:57.054 14:51:20 -- setup/hugepages.sh@92 -- # local surp 00:04:57.054 14:51:20 -- setup/hugepages.sh@93 -- # local resv 00:04:57.054 14:51:20 -- setup/hugepages.sh@94 -- # local anon 00:04:57.054 14:51:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:57.054 14:51:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:57.054 14:51:20 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:57.054 14:51:20 -- setup/common.sh@18 -- # local node= 00:04:57.054 14:51:20 -- setup/common.sh@19 -- # local var val 00:04:57.054 14:51:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.054 14:51:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.054 14:51:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.054 14:51:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.054 14:51:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.054 14:51:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.054 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.054 14:51:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6511724 kB' 'MemAvailable: 9441760 kB' 'Buffers: 2684 kB' 'Cached: 3130700 kB' 'SwapCached: 0 kB' 'Active: 495304 kB' 'Inactive: 2753712 kB' 'Active(anon): 126120 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753712 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117304 kB' 'Mapped: 50248 kB' 'Shmem: 10488 kB' 'KReclaimable: 88176 kB' 'Slab: 189456 kB' 'SReclaimable: 88176 kB' 'SUnreclaim: 101280 kB' 'KernelStack: 6696 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 312436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:57.054 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.054 14:51:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.054 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.054 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.054 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.054 14:51:20 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.054 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.054 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.054 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.054 14:51:20 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.054 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.054 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.054 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.054 14:51:20 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.054 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.054 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.054 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- 
setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.055 14:51:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.055 14:51:20 -- setup/common.sh@33 -- # echo 0 00:04:57.055 14:51:20 -- setup/common.sh@33 -- # return 0 00:04:57.055 14:51:20 -- setup/hugepages.sh@97 -- # anon=0 00:04:57.055 14:51:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:57.055 14:51:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.055 14:51:20 -- setup/common.sh@18 -- # local node= 00:04:57.055 14:51:20 -- setup/common.sh@19 -- # local var val 00:04:57.055 14:51:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.055 14:51:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.055 14:51:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.055 14:51:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.055 14:51:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.055 14:51:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.055 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6511912 kB' 'MemAvailable: 9441948 kB' 'Buffers: 2684 kB' 'Cached: 3130700 kB' 'SwapCached: 0 kB' 'Active: 494904 kB' 'Inactive: 2753712 kB' 'Active(anon): 125720 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753712 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117144 kB' 'Mapped: 50204 kB' 'Shmem: 10488 kB' 'KReclaimable: 88176 kB' 'Slab: 189456 kB' 'SReclaimable: 88176 kB' 'SUnreclaim: 101280 kB' 'KernelStack: 6616 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 312436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 
9437184 kB' 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 
14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.056 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.056 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 
-- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.057 14:51:20 -- setup/common.sh@33 -- # echo 0 00:04:57.057 14:51:20 -- setup/common.sh@33 -- # return 0 00:04:57.057 14:51:20 -- setup/hugepages.sh@99 -- # surp=0 00:04:57.057 14:51:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:57.057 14:51:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:57.057 14:51:20 -- setup/common.sh@18 -- # local node= 00:04:57.057 14:51:20 -- setup/common.sh@19 -- # local var val 00:04:57.057 14:51:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.057 14:51:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.057 14:51:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.057 14:51:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.057 14:51:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.057 14:51:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6511912 kB' 'MemAvailable: 9441948 kB' 'Buffers: 2684 kB' 'Cached: 3130700 kB' 'SwapCached: 0 kB' 'Active: 494696 kB' 'Inactive: 2753712 kB' 'Active(anon): 125512 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753712 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116888 kB' 'Mapped: 50160 kB' 'Shmem: 10488 kB' 'KReclaimable: 88176 kB' 'Slab: 189480 kB' 'SReclaimable: 88176 kB' 'SUnreclaim: 101304 kB' 'KernelStack: 6672 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 312436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # 
continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.057 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.057 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 
-- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.319 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.319 14:51:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.320 14:51:20 -- 
setup/common.sh@32 -- # continue 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.320 14:51:20 -- setup/common.sh@33 -- # echo 0 00:04:57.320 14:51:20 -- setup/common.sh@33 -- # return 0 00:04:57.320 14:51:20 -- setup/hugepages.sh@100 -- # resv=0 00:04:57.320 nr_hugepages=1024 00:04:57.320 14:51:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:57.320 resv_hugepages=0 00:04:57.320 14:51:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:57.320 surplus_hugepages=0 00:04:57.320 14:51:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:57.320 anon_hugepages=0 00:04:57.320 14:51:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:57.320 14:51:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:57.320 14:51:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:57.320 14:51:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:57.320 14:51:20 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:57.320 14:51:20 -- setup/common.sh@18 -- # local node= 00:04:57.320 14:51:20 -- 
setup/common.sh@19 -- # local var val 00:04:57.320 14:51:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.320 14:51:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.320 14:51:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.320 14:51:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.320 14:51:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.320 14:51:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.320 14:51:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6511912 kB' 'MemAvailable: 9441948 kB' 'Buffers: 2684 kB' 'Cached: 3130700 kB' 'SwapCached: 0 kB' 'Active: 494944 kB' 'Inactive: 2753712 kB' 'Active(anon): 125760 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753712 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116888 kB' 'Mapped: 50160 kB' 'Shmem: 10488 kB' 'KReclaimable: 88176 kB' 'Slab: 189480 kB' 'SReclaimable: 88176 kB' 'SUnreclaim: 101304 kB' 'KernelStack: 6672 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 312436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.320 
14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.320 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.320 14:51:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 
14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- 
setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.321 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.321 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.322 14:51:20 -- setup/common.sh@33 -- # echo 1024 00:04:57.322 14:51:20 -- setup/common.sh@33 -- # return 0 00:04:57.322 14:51:20 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:57.322 14:51:20 -- setup/hugepages.sh@112 -- # get_nodes 00:04:57.322 14:51:20 -- setup/hugepages.sh@27 -- # local node 00:04:57.322 14:51:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:57.322 14:51:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:57.322 14:51:20 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:57.322 14:51:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:57.322 14:51:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:57.322 14:51:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:57.322 14:51:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:57.322 14:51:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.322 14:51:20 -- setup/common.sh@18 -- # local node=0 00:04:57.322 14:51:20 -- setup/common.sh@19 -- # local var val 00:04:57.322 14:51:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.322 14:51:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.322 14:51:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:57.322 14:51:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:57.322 14:51:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.322 14:51:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6511660 kB' 'MemUsed: 5727460 kB' 'SwapCached: 0 kB' 'Active: 494708 kB' 'Inactive: 2753712 kB' 'Active(anon): 125524 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753712 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3133384 kB' 'Mapped: 50160 kB' 'AnonPages: 116632 kB' 'Shmem: 10488 kB' 
'KernelStack: 6672 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88176 kB' 'Slab: 189476 kB' 'SReclaimable: 88176 kB' 'SUnreclaim: 101300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.322 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.322 14:51:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:57.322 14:51:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # continue 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.323 14:51:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.323 14:51:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.323 
14:51:20 -- setup/common.sh@33 -- # echo 0 00:04:57.323 14:51:20 -- setup/common.sh@33 -- # return 0 00:04:57.323 14:51:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:57.323 14:51:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:57.323 14:51:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:57.323 14:51:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:57.323 node0=1024 expecting 1024 00:04:57.323 14:51:20 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:57.323 14:51:20 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:57.323 00:04:57.323 real 0m1.134s 00:04:57.323 user 0m0.585s 00:04:57.323 sys 0m0.617s 00:04:57.323 14:51:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.323 14:51:20 -- common/autotest_common.sh@10 -- # set +x 00:04:57.323 ************************************ 00:04:57.323 END TEST no_shrink_alloc 00:04:57.323 ************************************ 00:04:57.323 14:51:20 -- setup/hugepages.sh@217 -- # clear_hp 00:04:57.323 14:51:20 -- setup/hugepages.sh@37 -- # local node hp 00:04:57.323 14:51:20 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:57.323 14:51:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:57.323 14:51:20 -- setup/hugepages.sh@41 -- # echo 0 00:04:57.323 14:51:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:57.323 14:51:20 -- setup/hugepages.sh@41 -- # echo 0 00:04:57.323 14:51:20 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:57.323 14:51:20 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:57.323 00:04:57.323 real 0m5.249s 00:04:57.323 user 0m2.496s 00:04:57.323 sys 0m2.725s 00:04:57.323 14:51:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.323 14:51:20 -- common/autotest_common.sh@10 -- # set +x 00:04:57.323 ************************************ 00:04:57.323 END TEST hugepages 00:04:57.323 ************************************ 00:04:57.323 14:51:20 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:57.323 14:51:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.323 14:51:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.323 14:51:20 -- common/autotest_common.sh@10 -- # set +x 00:04:57.323 ************************************ 00:04:57.323 START TEST driver 00:04:57.323 ************************************ 00:04:57.323 14:51:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:57.323 * Looking for test storage... 
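The no_shrink_alloc run above boils down to one check: every per-node counter read back from sysfs still reports the 1024 hugepages that were reserved. As a reference for reading those counters outside the harness, here is a minimal standalone sketch in bash; it is not the SPDK setup/common.sh helper itself, the field name and node number are illustrative, and it assumes a Linux host that exposes /sys/devices/system/node:

    # Read one field from a node's meminfo, falling back to the global /proc/meminfo.
    get_node_meminfo() {
        local field=$1 node=${2:-0}
        local f=/sys/devices/system/node/node$node/meminfo
        [[ -e $f ]] || f=/proc/meminfo
        awk -v k="$field:" '{ for (i = 1; i <= NF; i++) if ($i == k) { print $(i + 1); exit } }' "$f"
    }

    get_node_meminfo HugePages_Total 0   # prints 1024 on the box traced above
    get_node_meminfo HugePages_Surp 0    # prints 0

The harness reaches the same numbers by mapfile-ing the whole meminfo file into an array and scanning it field by field, which is what produces the long runs of continue statements in the trace.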
00:04:57.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:57.583 14:51:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:57.583 14:51:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:57.583 14:51:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:57.583 14:51:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:57.583 14:51:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:57.583 14:51:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:57.583 14:51:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:57.583 14:51:20 -- scripts/common.sh@335 -- # IFS=.-: 00:04:57.583 14:51:20 -- scripts/common.sh@335 -- # read -ra ver1 00:04:57.583 14:51:20 -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.583 14:51:20 -- scripts/common.sh@336 -- # read -ra ver2 00:04:57.583 14:51:20 -- scripts/common.sh@337 -- # local 'op=<' 00:04:57.583 14:51:20 -- scripts/common.sh@339 -- # ver1_l=2 00:04:57.583 14:51:20 -- scripts/common.sh@340 -- # ver2_l=1 00:04:57.583 14:51:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:57.583 14:51:20 -- scripts/common.sh@343 -- # case "$op" in 00:04:57.583 14:51:20 -- scripts/common.sh@344 -- # : 1 00:04:57.583 14:51:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:57.583 14:51:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.583 14:51:21 -- scripts/common.sh@364 -- # decimal 1 00:04:57.583 14:51:21 -- scripts/common.sh@352 -- # local d=1 00:04:57.583 14:51:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.583 14:51:21 -- scripts/common.sh@354 -- # echo 1 00:04:57.583 14:51:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:57.583 14:51:21 -- scripts/common.sh@365 -- # decimal 2 00:04:57.583 14:51:21 -- scripts/common.sh@352 -- # local d=2 00:04:57.583 14:51:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.583 14:51:21 -- scripts/common.sh@354 -- # echo 2 00:04:57.583 14:51:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:57.583 14:51:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:57.583 14:51:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:57.583 14:51:21 -- scripts/common.sh@367 -- # return 0 00:04:57.583 14:51:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.583 14:51:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:57.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.583 --rc genhtml_branch_coverage=1 00:04:57.583 --rc genhtml_function_coverage=1 00:04:57.583 --rc genhtml_legend=1 00:04:57.583 --rc geninfo_all_blocks=1 00:04:57.583 --rc geninfo_unexecuted_blocks=1 00:04:57.583 00:04:57.583 ' 00:04:57.583 14:51:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:57.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.583 --rc genhtml_branch_coverage=1 00:04:57.583 --rc genhtml_function_coverage=1 00:04:57.583 --rc genhtml_legend=1 00:04:57.583 --rc geninfo_all_blocks=1 00:04:57.583 --rc geninfo_unexecuted_blocks=1 00:04:57.583 00:04:57.583 ' 00:04:57.583 14:51:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:57.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.583 --rc genhtml_branch_coverage=1 00:04:57.583 --rc genhtml_function_coverage=1 00:04:57.583 --rc genhtml_legend=1 00:04:57.583 --rc geninfo_all_blocks=1 00:04:57.583 --rc geninfo_unexecuted_blocks=1 00:04:57.583 00:04:57.583 ' 00:04:57.583 14:51:21 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:57.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.583 --rc genhtml_branch_coverage=1 00:04:57.583 --rc genhtml_function_coverage=1 00:04:57.583 --rc genhtml_legend=1 00:04:57.583 --rc geninfo_all_blocks=1 00:04:57.583 --rc geninfo_unexecuted_blocks=1 00:04:57.583 00:04:57.583 ' 00:04:57.583 14:51:21 -- setup/driver.sh@68 -- # setup reset 00:04:57.583 14:51:21 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:57.583 14:51:21 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:58.152 14:51:21 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:58.152 14:51:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.152 14:51:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.152 14:51:21 -- common/autotest_common.sh@10 -- # set +x 00:04:58.152 ************************************ 00:04:58.152 START TEST guess_driver 00:04:58.152 ************************************ 00:04:58.152 14:51:21 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:58.152 14:51:21 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:58.152 14:51:21 -- setup/driver.sh@47 -- # local fail=0 00:04:58.152 14:51:21 -- setup/driver.sh@49 -- # pick_driver 00:04:58.152 14:51:21 -- setup/driver.sh@36 -- # vfio 00:04:58.152 14:51:21 -- setup/driver.sh@21 -- # local iommu_grups 00:04:58.152 14:51:21 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:58.152 14:51:21 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:58.152 14:51:21 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:58.152 14:51:21 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:58.152 14:51:21 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:58.152 14:51:21 -- setup/driver.sh@32 -- # return 1 00:04:58.152 14:51:21 -- setup/driver.sh@38 -- # uio 00:04:58.152 14:51:21 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:58.152 14:51:21 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:58.152 14:51:21 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:58.152 14:51:21 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:58.152 14:51:21 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:58.152 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:58.152 14:51:21 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:58.152 14:51:21 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:58.152 14:51:21 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:58.152 Looking for driver=uio_pci_generic 00:04:58.152 14:51:21 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:58.152 14:51:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.152 14:51:21 -- setup/driver.sh@45 -- # setup output config 00:04:58.152 14:51:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.152 14:51:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:58.720 14:51:22 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:58.720 14:51:22 -- setup/driver.sh@58 -- # continue 00:04:58.720 14:51:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.980 14:51:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.980 14:51:22 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:04:58.980 14:51:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.980 14:51:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.980 14:51:22 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:58.980 14:51:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.980 14:51:22 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:58.980 14:51:22 -- setup/driver.sh@65 -- # setup reset 00:04:58.980 14:51:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:58.980 14:51:22 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:59.548 00:04:59.548 real 0m1.503s 00:04:59.548 user 0m0.581s 00:04:59.548 sys 0m0.923s 00:04:59.548 ************************************ 00:04:59.548 END TEST guess_driver 00:04:59.548 ************************************ 00:04:59.548 14:51:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:59.548 14:51:23 -- common/autotest_common.sh@10 -- # set +x 00:04:59.808 00:04:59.808 real 0m2.321s 00:04:59.808 user 0m0.914s 00:04:59.808 sys 0m1.473s 00:04:59.808 14:51:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:59.808 14:51:23 -- common/autotest_common.sh@10 -- # set +x 00:04:59.808 ************************************ 00:04:59.808 END TEST driver 00:04:59.808 ************************************ 00:04:59.808 14:51:23 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:59.808 14:51:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.808 14:51:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.808 14:51:23 -- common/autotest_common.sh@10 -- # set +x 00:04:59.808 ************************************ 00:04:59.808 START TEST devices 00:04:59.808 ************************************ 00:04:59.808 14:51:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:59.808 * Looking for test storage... 00:04:59.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:59.808 14:51:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:59.808 14:51:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:59.808 14:51:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:59.808 14:51:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:59.808 14:51:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:59.808 14:51:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:59.808 14:51:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:59.808 14:51:23 -- scripts/common.sh@335 -- # IFS=.-: 00:04:59.808 14:51:23 -- scripts/common.sh@335 -- # read -ra ver1 00:04:59.808 14:51:23 -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.808 14:51:23 -- scripts/common.sh@336 -- # read -ra ver2 00:04:59.808 14:51:23 -- scripts/common.sh@337 -- # local 'op=<' 00:04:59.808 14:51:23 -- scripts/common.sh@339 -- # ver1_l=2 00:04:59.808 14:51:23 -- scripts/common.sh@340 -- # ver2_l=1 00:04:59.808 14:51:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:59.808 14:51:23 -- scripts/common.sh@343 -- # case "$op" in 00:04:59.808 14:51:23 -- scripts/common.sh@344 -- # : 1 00:04:59.808 14:51:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:59.808 14:51:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.808 14:51:23 -- scripts/common.sh@364 -- # decimal 1 00:04:59.808 14:51:23 -- scripts/common.sh@352 -- # local d=1 00:04:59.808 14:51:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.808 14:51:23 -- scripts/common.sh@354 -- # echo 1 00:04:59.808 14:51:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:59.808 14:51:23 -- scripts/common.sh@365 -- # decimal 2 00:04:59.808 14:51:23 -- scripts/common.sh@352 -- # local d=2 00:04:59.808 14:51:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.808 14:51:23 -- scripts/common.sh@354 -- # echo 2 00:04:59.808 14:51:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:59.808 14:51:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:59.808 14:51:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:59.808 14:51:23 -- scripts/common.sh@367 -- # return 0 00:04:59.808 14:51:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.808 14:51:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:59.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.808 --rc genhtml_branch_coverage=1 00:04:59.808 --rc genhtml_function_coverage=1 00:04:59.808 --rc genhtml_legend=1 00:04:59.808 --rc geninfo_all_blocks=1 00:04:59.808 --rc geninfo_unexecuted_blocks=1 00:04:59.808 00:04:59.808 ' 00:04:59.808 14:51:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:59.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.808 --rc genhtml_branch_coverage=1 00:04:59.808 --rc genhtml_function_coverage=1 00:04:59.808 --rc genhtml_legend=1 00:04:59.808 --rc geninfo_all_blocks=1 00:04:59.808 --rc geninfo_unexecuted_blocks=1 00:04:59.808 00:04:59.808 ' 00:04:59.808 14:51:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:59.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.808 --rc genhtml_branch_coverage=1 00:04:59.808 --rc genhtml_function_coverage=1 00:04:59.808 --rc genhtml_legend=1 00:04:59.808 --rc geninfo_all_blocks=1 00:04:59.808 --rc geninfo_unexecuted_blocks=1 00:04:59.808 00:04:59.808 ' 00:04:59.808 14:51:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:59.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.808 --rc genhtml_branch_coverage=1 00:04:59.808 --rc genhtml_function_coverage=1 00:04:59.808 --rc genhtml_legend=1 00:04:59.808 --rc geninfo_all_blocks=1 00:04:59.808 --rc geninfo_unexecuted_blocks=1 00:04:59.808 00:04:59.808 ' 00:04:59.808 14:51:23 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:59.808 14:51:23 -- setup/devices.sh@192 -- # setup reset 00:04:59.808 14:51:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:59.808 14:51:23 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:00.745 14:51:24 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:00.745 14:51:24 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:00.745 14:51:24 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:00.745 14:51:24 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:00.746 14:51:24 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:00.746 14:51:24 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:00.746 14:51:24 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:00.746 14:51:24 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:00.746 14:51:24 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:05:00.746 14:51:24 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:00.746 14:51:24 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:00.746 14:51:24 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:00.746 14:51:24 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:00.746 14:51:24 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:00.746 14:51:24 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:00.746 14:51:24 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:00.746 14:51:24 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:00.746 14:51:24 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:00.746 14:51:24 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:00.746 14:51:24 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:00.746 14:51:24 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:00.746 14:51:24 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:00.746 14:51:24 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:00.746 14:51:24 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:00.746 14:51:24 -- setup/devices.sh@196 -- # blocks=() 00:05:00.746 14:51:24 -- setup/devices.sh@196 -- # declare -a blocks 00:05:00.746 14:51:24 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:00.746 14:51:24 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:00.746 14:51:24 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:00.746 14:51:24 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:00.746 14:51:24 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:00.746 14:51:24 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:00.746 14:51:24 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:00.746 14:51:24 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:00.746 14:51:24 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:00.746 14:51:24 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:00.746 14:51:24 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:00.746 No valid GPT data, bailing 00:05:00.746 14:51:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:00.746 14:51:24 -- scripts/common.sh@393 -- # pt= 00:05:00.746 14:51:24 -- scripts/common.sh@394 -- # return 1 00:05:00.746 14:51:24 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:00.746 14:51:24 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:00.746 14:51:24 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:00.746 14:51:24 -- setup/common.sh@80 -- # echo 5368709120 00:05:00.746 14:51:24 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:00.746 14:51:24 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:00.746 14:51:24 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:00.746 14:51:24 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:00.746 14:51:24 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:00.746 14:51:24 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:00.746 14:51:24 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:00.746 14:51:24 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:00.746 14:51:24 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
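The get_zoned_devs pass traced above filters out zoned namespaces before any disk is touched: a device counts as zoned when its queue/zoned attribute reports anything other than none. A condensed sketch of that check (illustrative only, not the autotest helper verbatim):

    # A block device is zoned when /sys/block/<dev>/queue/zoned exists and is not "none".
    is_block_zoned() {
        local dev=$1
        [[ -e /sys/block/$dev/queue/zoned ]] || return 1
        [[ $(< "/sys/block/$dev/queue/zoned") != none ]]
    }

    for b in /sys/block/nvme*; do
        is_block_zoned "${b##*/}" && echo "skipping zoned device ${b##*/}"
    done

In this run all four namespaces report none, so nothing is excluded and the trace moves straight on to the block_in_use probes.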
00:05:00.746 14:51:24 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:00.746 14:51:24 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:00.746 No valid GPT data, bailing 00:05:00.746 14:51:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:01.006 14:51:24 -- scripts/common.sh@393 -- # pt= 00:05:01.006 14:51:24 -- scripts/common.sh@394 -- # return 1 00:05:01.006 14:51:24 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:01.006 14:51:24 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:01.006 14:51:24 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:01.006 14:51:24 -- setup/common.sh@80 -- # echo 4294967296 00:05:01.006 14:51:24 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:01.006 14:51:24 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:01.006 14:51:24 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:01.006 14:51:24 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:01.006 14:51:24 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:01.006 14:51:24 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:01.006 14:51:24 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:01.006 14:51:24 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:01.006 14:51:24 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:01.006 14:51:24 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:01.006 14:51:24 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:01.006 No valid GPT data, bailing 00:05:01.006 14:51:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:01.006 14:51:24 -- scripts/common.sh@393 -- # pt= 00:05:01.006 14:51:24 -- scripts/common.sh@394 -- # return 1 00:05:01.006 14:51:24 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:01.006 14:51:24 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:01.006 14:51:24 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:01.006 14:51:24 -- setup/common.sh@80 -- # echo 4294967296 00:05:01.006 14:51:24 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:01.006 14:51:24 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:01.006 14:51:24 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:01.006 14:51:24 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:01.006 14:51:24 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:01.006 14:51:24 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:01.006 14:51:24 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:01.006 14:51:24 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:01.006 14:51:24 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:01.006 14:51:24 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:01.006 14:51:24 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:01.006 No valid GPT data, bailing 00:05:01.006 14:51:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:01.006 14:51:24 -- scripts/common.sh@393 -- # pt= 00:05:01.006 14:51:24 -- scripts/common.sh@394 -- # return 1 00:05:01.006 14:51:24 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:01.006 14:51:24 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:01.006 14:51:24 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:01.006 14:51:24 -- setup/common.sh@80 -- # echo 4294967296 
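Each candidate namespace passes two screens before it is added to the blocks array: it must not already carry a partition table, and it must be at least min_disk_size (3221225472 bytes, i.e. 3 GiB). A rough sketch of both screens, assuming only the blkid and sysfs interfaces that appear in the trace; the real block_in_use additionally consults SPDK's spdk-gpt.py, which is what prints the "No valid GPT data, bailing" lines:

    min_disk_size=$((3 * 1024 * 1024 * 1024))

    # In use if blkid reports any partition-table type for the whole disk.
    disk_in_use() {
        local pt
        pt=$(blkid -s PTTYPE -o value "/dev/$1" 2>/dev/null)
        [[ -n $pt ]]
    }

    # /sys/block/<dev>/size counts 512-byte sectors regardless of logical block size.
    dev_size_bytes() {
        echo $(( $(< "/sys/block/$1/size") * 512 ))
    }

    for b in /sys/block/nvme*n*; do      # simplified glob; the harness also skips multipath c-devices
        d=${b##*/}
        disk_in_use "$d" && continue
        (( $(dev_size_bytes "$d") >= min_disk_size )) && echo "$d is usable for the test"
    done

The remaining comparisons below leave the 5 GiB nvme0n1 and the three 4 GiB nvme1 namespaces all usable, and nvme0n1 becomes test_disk.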
00:05:01.006 14:51:24 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:01.006 14:51:24 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:01.006 14:51:24 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:01.006 14:51:24 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:01.006 14:51:24 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:01.006 14:51:24 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:01.006 14:51:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:01.006 14:51:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.006 14:51:24 -- common/autotest_common.sh@10 -- # set +x 00:05:01.006 ************************************ 00:05:01.006 START TEST nvme_mount 00:05:01.006 ************************************ 00:05:01.006 14:51:24 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:01.006 14:51:24 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:01.006 14:51:24 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:01.006 14:51:24 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.006 14:51:24 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:01.006 14:51:24 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:01.006 14:51:24 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:01.006 14:51:24 -- setup/common.sh@40 -- # local part_no=1 00:05:01.006 14:51:24 -- setup/common.sh@41 -- # local size=1073741824 00:05:01.006 14:51:24 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:01.006 14:51:24 -- setup/common.sh@44 -- # parts=() 00:05:01.006 14:51:24 -- setup/common.sh@44 -- # local parts 00:05:01.006 14:51:24 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:01.006 14:51:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:01.006 14:51:24 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:01.006 14:51:24 -- setup/common.sh@46 -- # (( part++ )) 00:05:01.006 14:51:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:01.006 14:51:24 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:01.006 14:51:24 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:01.006 14:51:24 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:02.384 Creating new GPT entries in memory. 00:05:02.384 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:02.384 other utilities. 00:05:02.384 14:51:25 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:02.384 14:51:25 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:02.384 14:51:25 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:02.384 14:51:25 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:02.384 14:51:25 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:03.322 Creating new GPT entries in memory. 00:05:03.322 The operation has completed successfully. 
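The nvme_mount test that starts here prepares its target the same way every time: wipe whatever is on the disk, create a single small GPT partition while holding a lock on the block device, then format and mount it. The essential commands, lifted from the trace (the sector numbers are exactly the ones used above; partprobe is an assumed stand-in for the udev wait that scripts/sync_dev_uevents.sh performs in the real run):

    disk=/dev/nvme0n1
    mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all                          # destroy any existing GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:264191  # partition 1, first:last sector as in the trace
    partprobe "$disk"                                 # assumption: simple substitute for the uevent wait
    mkdir -p "$mount_point"
    mkfs.ext4 -qF "${disk}p1"
    mount "${disk}p1" "$mount_point"

The flock around sgdisk serialises the partition-table rewrite with anything else that honours the same whole-disk lock, and the "wait 65883" seen below is the harness collecting the background sync_dev_uevents.sh watcher it started for nvme0n1p1 before it touches /dev/nvme0n1p1.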
00:05:03.322 14:51:26 -- setup/common.sh@57 -- # (( part++ )) 00:05:03.322 14:51:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:03.322 14:51:26 -- setup/common.sh@62 -- # wait 65883 00:05:03.322 14:51:26 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:03.322 14:51:26 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:03.322 14:51:26 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:03.322 14:51:26 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:03.322 14:51:26 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:03.322 14:51:26 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:03.322 14:51:26 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:03.322 14:51:26 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:03.322 14:51:26 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:03.322 14:51:26 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:03.322 14:51:26 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:03.322 14:51:26 -- setup/devices.sh@53 -- # local found=0 00:05:03.322 14:51:26 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:03.322 14:51:26 -- setup/devices.sh@56 -- # : 00:05:03.322 14:51:26 -- setup/devices.sh@59 -- # local pci status 00:05:03.322 14:51:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.322 14:51:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:03.322 14:51:26 -- setup/devices.sh@47 -- # setup output config 00:05:03.322 14:51:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.322 14:51:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:03.322 14:51:26 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:03.322 14:51:26 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:03.322 14:51:26 -- setup/devices.sh@63 -- # found=1 00:05:03.322 14:51:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.322 14:51:26 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:03.322 14:51:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.891 14:51:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:03.891 14:51:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.891 14:51:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:03.891 14:51:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.891 14:51:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:03.891 14:51:27 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:03.891 14:51:27 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:03.891 14:51:27 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:03.891 14:51:27 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:03.891 14:51:27 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:03.891 14:51:27 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:03.891 14:51:27 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:03.891 14:51:27 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:03.891 14:51:27 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:03.891 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:03.891 14:51:27 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:03.891 14:51:27 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:04.150 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:04.150 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:04.150 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:04.150 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:04.150 14:51:27 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:04.150 14:51:27 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:04.150 14:51:27 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:04.150 14:51:27 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:04.150 14:51:27 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:04.150 14:51:27 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:04.150 14:51:27 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:04.150 14:51:27 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:04.150 14:51:27 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:04.150 14:51:27 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:04.150 14:51:27 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:04.150 14:51:27 -- setup/devices.sh@53 -- # local found=0 00:05:04.150 14:51:27 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:04.150 14:51:27 -- setup/devices.sh@56 -- # : 00:05:04.150 14:51:27 -- setup/devices.sh@59 -- # local pci status 00:05:04.150 14:51:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.150 14:51:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:04.150 14:51:27 -- setup/devices.sh@47 -- # setup output config 00:05:04.150 14:51:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.150 14:51:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:04.409 14:51:27 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:04.409 14:51:27 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:04.409 14:51:27 -- setup/devices.sh@63 -- # found=1 00:05:04.409 14:51:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.409 14:51:27 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:04.409 
14:51:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.668 14:51:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:04.668 14:51:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.928 14:51:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:04.928 14:51:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.928 14:51:28 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:04.928 14:51:28 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:04.928 14:51:28 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:04.928 14:51:28 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:04.928 14:51:28 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:04.928 14:51:28 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:04.928 14:51:28 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:04.928 14:51:28 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:04.928 14:51:28 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:04.928 14:51:28 -- setup/devices.sh@50 -- # local mount_point= 00:05:04.928 14:51:28 -- setup/devices.sh@51 -- # local test_file= 00:05:04.928 14:51:28 -- setup/devices.sh@53 -- # local found=0 00:05:04.928 14:51:28 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:04.928 14:51:28 -- setup/devices.sh@59 -- # local pci status 00:05:04.928 14:51:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.928 14:51:28 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:04.928 14:51:28 -- setup/devices.sh@47 -- # setup output config 00:05:04.928 14:51:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.928 14:51:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:05.187 14:51:28 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:05.187 14:51:28 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:05.187 14:51:28 -- setup/devices.sh@63 -- # found=1 00:05:05.187 14:51:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.187 14:51:28 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:05.187 14:51:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.460 14:51:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:05.460 14:51:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.460 14:51:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:05.460 14:51:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.736 14:51:29 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:05.736 14:51:29 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:05.736 14:51:29 -- setup/devices.sh@68 -- # return 0 00:05:05.736 14:51:29 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:05.736 14:51:29 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:05.736 14:51:29 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:05.737 14:51:29 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:05.737 14:51:29 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:05.737 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:05.737 00:05:05.737 real 0m4.587s 00:05:05.737 user 0m1.103s 00:05:05.737 sys 0m1.163s 00:05:05.737 14:51:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.737 ************************************ 00:05:05.737 END TEST nvme_mount 00:05:05.737 ************************************ 00:05:05.737 14:51:29 -- common/autotest_common.sh@10 -- # set +x 00:05:05.737 14:51:29 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:05.737 14:51:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.737 14:51:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.737 14:51:29 -- common/autotest_common.sh@10 -- # set +x 00:05:05.737 ************************************ 00:05:05.737 START TEST dm_mount 00:05:05.737 ************************************ 00:05:05.737 14:51:29 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:05.737 14:51:29 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:05.737 14:51:29 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:05.737 14:51:29 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:05.737 14:51:29 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:05.737 14:51:29 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:05.737 14:51:29 -- setup/common.sh@40 -- # local part_no=2 00:05:05.737 14:51:29 -- setup/common.sh@41 -- # local size=1073741824 00:05:05.737 14:51:29 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:05.737 14:51:29 -- setup/common.sh@44 -- # parts=() 00:05:05.737 14:51:29 -- setup/common.sh@44 -- # local parts 00:05:05.737 14:51:29 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:05.737 14:51:29 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:05.737 14:51:29 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:05.737 14:51:29 -- setup/common.sh@46 -- # (( part++ )) 00:05:05.737 14:51:29 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:05.737 14:51:29 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:05.737 14:51:29 -- setup/common.sh@46 -- # (( part++ )) 00:05:05.737 14:51:29 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:05.737 14:51:29 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:05.737 14:51:29 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:05.737 14:51:29 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:06.744 Creating new GPT entries in memory. 00:05:06.744 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:06.744 other utilities. 00:05:06.744 14:51:30 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:06.744 14:51:30 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:06.744 14:51:30 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:06.744 14:51:30 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:06.744 14:51:30 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:07.683 Creating new GPT entries in memory. 00:05:07.683 The operation has completed successfully. 00:05:07.683 14:51:31 -- setup/common.sh@57 -- # (( part++ )) 00:05:07.683 14:51:31 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:07.683 14:51:31 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:07.683 14:51:31 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:07.683 14:51:31 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:09.060 The operation has completed successfully. 00:05:09.060 14:51:32 -- setup/common.sh@57 -- # (( part++ )) 00:05:09.060 14:51:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:09.060 14:51:32 -- setup/common.sh@62 -- # wait 66347 00:05:09.060 14:51:32 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:09.060 14:51:32 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:09.060 14:51:32 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:09.060 14:51:32 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:09.060 14:51:32 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:09.060 14:51:32 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:09.060 14:51:32 -- setup/devices.sh@161 -- # break 00:05:09.060 14:51:32 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:09.060 14:51:32 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:09.060 14:51:32 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:09.060 14:51:32 -- setup/devices.sh@166 -- # dm=dm-0 00:05:09.060 14:51:32 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:09.060 14:51:32 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:09.060 14:51:32 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:09.060 14:51:32 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:09.060 14:51:32 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:09.060 14:51:32 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:09.060 14:51:32 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:09.060 14:51:32 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:09.061 14:51:32 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:09.061 14:51:32 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:09.061 14:51:32 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:09.061 14:51:32 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:09.061 14:51:32 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:09.061 14:51:32 -- setup/devices.sh@53 -- # local found=0 00:05:09.061 14:51:32 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:09.061 14:51:32 -- setup/devices.sh@56 -- # : 00:05:09.061 14:51:32 -- setup/devices.sh@59 -- # local pci status 00:05:09.061 14:51:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.061 14:51:32 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:09.061 14:51:32 -- setup/devices.sh@47 -- # setup output config 00:05:09.061 14:51:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.061 14:51:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:09.061 14:51:32 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:09.061 14:51:32 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:09.061 14:51:32 -- setup/devices.sh@63 -- # found=1 00:05:09.061 14:51:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.061 14:51:32 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:09.061 14:51:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.319 14:51:32 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:09.319 14:51:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.579 14:51:32 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:09.579 14:51:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.579 14:51:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:09.579 14:51:33 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:09.579 14:51:33 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:09.579 14:51:33 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:09.579 14:51:33 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:09.579 14:51:33 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:09.579 14:51:33 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:09.579 14:51:33 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:09.579 14:51:33 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:09.579 14:51:33 -- setup/devices.sh@50 -- # local mount_point= 00:05:09.579 14:51:33 -- setup/devices.sh@51 -- # local test_file= 00:05:09.579 14:51:33 -- setup/devices.sh@53 -- # local found=0 00:05:09.579 14:51:33 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:09.579 14:51:33 -- setup/devices.sh@59 -- # local pci status 00:05:09.579 14:51:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.579 14:51:33 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:09.579 14:51:33 -- setup/devices.sh@47 -- # setup output config 00:05:09.579 14:51:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.579 14:51:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:09.838 14:51:33 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:09.838 14:51:33 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:09.838 14:51:33 -- setup/devices.sh@63 -- # found=1 00:05:09.838 14:51:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.838 14:51:33 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:09.838 14:51:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.097 14:51:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:10.097 14:51:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.097 14:51:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:10.097 14:51:33 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.356 14:51:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:10.356 14:51:33 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:10.356 14:51:33 -- setup/devices.sh@68 -- # return 0 00:05:10.356 14:51:33 -- setup/devices.sh@187 -- # cleanup_dm 00:05:10.356 14:51:33 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:10.356 14:51:33 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:10.356 14:51:33 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:10.356 14:51:33 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:10.356 14:51:33 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:10.356 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:10.356 14:51:33 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:10.356 14:51:33 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:10.356 00:05:10.356 real 0m4.609s 00:05:10.356 user 0m0.706s 00:05:10.356 sys 0m0.828s 00:05:10.356 14:51:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.356 ************************************ 00:05:10.356 END TEST dm_mount 00:05:10.356 14:51:33 -- common/autotest_common.sh@10 -- # set +x 00:05:10.356 ************************************ 00:05:10.356 14:51:33 -- setup/devices.sh@1 -- # cleanup 00:05:10.356 14:51:33 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:10.356 14:51:33 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:10.356 14:51:33 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:10.356 14:51:33 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:10.356 14:51:33 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:10.356 14:51:33 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:10.615 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:10.615 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:10.615 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:10.615 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:10.615 14:51:34 -- setup/devices.sh@12 -- # cleanup_dm 00:05:10.616 14:51:34 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:10.616 14:51:34 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:10.616 14:51:34 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:10.616 14:51:34 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:10.616 14:51:34 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:10.616 14:51:34 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:10.616 00:05:10.616 real 0m10.907s 00:05:10.616 user 0m2.609s 00:05:10.616 sys 0m2.615s 00:05:10.616 14:51:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.616 ************************************ 00:05:10.616 END TEST devices 00:05:10.616 ************************************ 00:05:10.616 14:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:10.616 00:05:10.616 real 0m23.490s 00:05:10.616 user 0m8.253s 00:05:10.616 sys 0m9.576s 00:05:10.616 14:51:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.616 14:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:10.616 ************************************ 00:05:10.616 END TEST setup.sh 00:05:10.616 ************************************ 00:05:10.616 14:51:34 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:10.875 Hugepages 00:05:10.875 node hugesize free / total 00:05:10.875 node0 1048576kB 0 / 0 00:05:10.875 node0 2048kB 2048 / 2048 00:05:10.875 00:05:10.875 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:10.875 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:11.135 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:11.135 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:11.135 14:51:34 -- spdk/autotest.sh@128 -- # uname -s 00:05:11.135 14:51:34 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:11.135 14:51:34 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:11.135 14:51:34 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:11.703 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.962 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.962 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.962 14:51:35 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:12.899 14:51:36 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:12.899 14:51:36 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:12.899 14:51:36 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:12.899 14:51:36 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:12.899 14:51:36 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:12.899 14:51:36 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:12.899 14:51:36 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:12.899 14:51:36 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:12.899 14:51:36 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:13.158 14:51:36 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:13.158 14:51:36 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:13.158 14:51:36 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:13.418 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.418 Waiting for block devices as requested 00:05:13.418 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:13.677 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:13.677 14:51:37 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:13.677 14:51:37 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:13.677 14:51:37 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:13.677 14:51:37 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:13.677 14:51:37 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:13.677 14:51:37 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:13.677 14:51:37 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:13.677 14:51:37 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:13.677 14:51:37 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:13.677 14:51:37 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:13.677 14:51:37 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:13.677 14:51:37 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:13.677 14:51:37 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:13.677 14:51:37 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:13.677 14:51:37 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:13.677 14:51:37 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:13.677 14:51:37 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:13.677 14:51:37 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:13.677 14:51:37 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:13.677 14:51:37 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:13.677 14:51:37 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:13.677 14:51:37 -- common/autotest_common.sh@1552 -- # continue 00:05:13.677 14:51:37 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:13.678 14:51:37 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:13.678 14:51:37 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:13.678 14:51:37 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:13.678 14:51:37 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:13.678 14:51:37 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:13.678 14:51:37 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:13.678 14:51:37 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:13.678 14:51:37 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:13.678 14:51:37 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:13.678 14:51:37 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:13.678 14:51:37 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:13.678 14:51:37 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:13.678 14:51:37 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:13.678 14:51:37 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:13.678 14:51:37 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:13.678 14:51:37 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:13.678 14:51:37 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:13.678 14:51:37 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:13.678 14:51:37 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:13.678 14:51:37 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:13.678 14:51:37 -- common/autotest_common.sh@1552 -- # continue 00:05:13.678 14:51:37 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:13.678 14:51:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:13.678 14:51:37 -- common/autotest_common.sh@10 -- # set +x 00:05:13.936 14:51:37 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:13.936 14:51:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:13.936 14:51:37 -- common/autotest_common.sh@10 -- # set +x 00:05:13.936 14:51:37 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:14.504 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.504 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:14.504 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:14.763 14:51:38 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:14.763 14:51:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:14.763 14:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:14.763 14:51:38 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:14.763 14:51:38 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:14.763 14:51:38 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:14.763 14:51:38 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:14.763 14:51:38 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:14.763 14:51:38 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:14.763 14:51:38 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:14.763 14:51:38 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:14.763 14:51:38 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:14.763 14:51:38 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:14.763 14:51:38 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:14.763 14:51:38 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:14.763 14:51:38 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:14.763 14:51:38 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:14.763 14:51:38 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:14.763 14:51:38 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:14.763 14:51:38 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:14.763 14:51:38 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:14.763 14:51:38 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:14.763 14:51:38 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:14.763 14:51:38 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:14.763 14:51:38 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:14.763 14:51:38 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:14.763 14:51:38 -- common/autotest_common.sh@1588 -- # return 0 00:05:14.763 14:51:38 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:14.763 14:51:38 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:14.763 14:51:38 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:14.763 14:51:38 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:14.763 14:51:38 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:14.763 14:51:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:14.763 14:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:14.763 14:51:38 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:14.763 14:51:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.763 14:51:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.763 14:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:14.763 ************************************ 00:05:14.763 START TEST env 00:05:14.763 ************************************ 00:05:14.763 14:51:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:14.763 * Looking for test storage... 
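Note: the get_nvme_bdfs helper traced above builds its BDF list by asking gen_nvme.sh for a config and pulling each controller's traddr out with jq. A minimal sketch of that pattern, assuming the repo path used in this run (the real helper also handles the zero-device case and is shared by several tests):

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # this run prints 0000:00:06.0 and 0000:00:07.0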
00:05:15.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:15.022 14:51:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:15.022 14:51:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:15.022 14:51:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:15.022 14:51:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:15.022 14:51:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:15.022 14:51:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:15.022 14:51:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:15.022 14:51:38 -- scripts/common.sh@335 -- # IFS=.-: 00:05:15.022 14:51:38 -- scripts/common.sh@335 -- # read -ra ver1 00:05:15.022 14:51:38 -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.022 14:51:38 -- scripts/common.sh@336 -- # read -ra ver2 00:05:15.022 14:51:38 -- scripts/common.sh@337 -- # local 'op=<' 00:05:15.022 14:51:38 -- scripts/common.sh@339 -- # ver1_l=2 00:05:15.022 14:51:38 -- scripts/common.sh@340 -- # ver2_l=1 00:05:15.022 14:51:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:15.022 14:51:38 -- scripts/common.sh@343 -- # case "$op" in 00:05:15.022 14:51:38 -- scripts/common.sh@344 -- # : 1 00:05:15.022 14:51:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:15.022 14:51:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.022 14:51:38 -- scripts/common.sh@364 -- # decimal 1 00:05:15.022 14:51:38 -- scripts/common.sh@352 -- # local d=1 00:05:15.022 14:51:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.022 14:51:38 -- scripts/common.sh@354 -- # echo 1 00:05:15.022 14:51:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:15.022 14:51:38 -- scripts/common.sh@365 -- # decimal 2 00:05:15.022 14:51:38 -- scripts/common.sh@352 -- # local d=2 00:05:15.022 14:51:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.022 14:51:38 -- scripts/common.sh@354 -- # echo 2 00:05:15.022 14:51:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:15.022 14:51:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:15.022 14:51:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:15.022 14:51:38 -- scripts/common.sh@367 -- # return 0 00:05:15.022 14:51:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.022 14:51:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:15.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.022 --rc genhtml_branch_coverage=1 00:05:15.022 --rc genhtml_function_coverage=1 00:05:15.022 --rc genhtml_legend=1 00:05:15.022 --rc geninfo_all_blocks=1 00:05:15.022 --rc geninfo_unexecuted_blocks=1 00:05:15.022 00:05:15.022 ' 00:05:15.022 14:51:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:15.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.022 --rc genhtml_branch_coverage=1 00:05:15.022 --rc genhtml_function_coverage=1 00:05:15.022 --rc genhtml_legend=1 00:05:15.022 --rc geninfo_all_blocks=1 00:05:15.022 --rc geninfo_unexecuted_blocks=1 00:05:15.022 00:05:15.022 ' 00:05:15.022 14:51:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:15.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.022 --rc genhtml_branch_coverage=1 00:05:15.022 --rc genhtml_function_coverage=1 00:05:15.022 --rc genhtml_legend=1 00:05:15.022 --rc geninfo_all_blocks=1 00:05:15.022 --rc geninfo_unexecuted_blocks=1 00:05:15.022 00:05:15.022 ' 00:05:15.022 14:51:38 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:15.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.022 --rc genhtml_branch_coverage=1 00:05:15.022 --rc genhtml_function_coverage=1 00:05:15.022 --rc genhtml_legend=1 00:05:15.022 --rc geninfo_all_blocks=1 00:05:15.022 --rc geninfo_unexecuted_blocks=1 00:05:15.022 00:05:15.022 ' 00:05:15.022 14:51:38 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:15.022 14:51:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.022 14:51:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.022 14:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:15.022 ************************************ 00:05:15.022 START TEST env_memory 00:05:15.022 ************************************ 00:05:15.022 14:51:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:15.022 00:05:15.022 00:05:15.022 CUnit - A unit testing framework for C - Version 2.1-3 00:05:15.022 http://cunit.sourceforge.net/ 00:05:15.022 00:05:15.022 00:05:15.022 Suite: memory 00:05:15.022 Test: alloc and free memory map ...[2024-11-18 14:51:38.529824] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:15.022 passed 00:05:15.022 Test: mem map translation ...[2024-11-18 14:51:38.561348] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:15.022 [2024-11-18 14:51:38.561393] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:15.023 [2024-11-18 14:51:38.561450] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:15.023 [2024-11-18 14:51:38.561461] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:15.282 passed 00:05:15.282 Test: mem map registration ...[2024-11-18 14:51:38.626003] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:15.282 [2024-11-18 14:51:38.626682] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:15.282 passed 00:05:15.282 Test: mem map adjacent registrations ...passed 00:05:15.282 00:05:15.282 Run Summary: Type Total Ran Passed Failed Inactive 00:05:15.282 suites 1 1 n/a 0 0 00:05:15.282 tests 4 4 4 0 0 00:05:15.282 asserts 152 152 152 0 n/a 00:05:15.282 00:05:15.282 Elapsed time = 0.218 seconds 00:05:15.282 00:05:15.282 real 0m0.240s 00:05:15.282 user 0m0.216s 00:05:15.282 sys 0m0.015s 00:05:15.282 14:51:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:15.282 14:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:15.282 ************************************ 00:05:15.282 END TEST env_memory 00:05:15.282 ************************************ 00:05:15.282 14:51:38 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:15.282 14:51:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.282 14:51:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.282 14:51:38 -- 
common/autotest_common.sh@10 -- # set +x 00:05:15.282 ************************************ 00:05:15.282 START TEST env_vtophys 00:05:15.282 ************************************ 00:05:15.282 14:51:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:15.282 EAL: lib.eal log level changed from notice to debug 00:05:15.282 EAL: Detected lcore 0 as core 0 on socket 0 00:05:15.282 EAL: Detected lcore 1 as core 0 on socket 0 00:05:15.282 EAL: Detected lcore 2 as core 0 on socket 0 00:05:15.282 EAL: Detected lcore 3 as core 0 on socket 0 00:05:15.282 EAL: Detected lcore 4 as core 0 on socket 0 00:05:15.282 EAL: Detected lcore 5 as core 0 on socket 0 00:05:15.282 EAL: Detected lcore 6 as core 0 on socket 0 00:05:15.282 EAL: Detected lcore 7 as core 0 on socket 0 00:05:15.282 EAL: Detected lcore 8 as core 0 on socket 0 00:05:15.282 EAL: Detected lcore 9 as core 0 on socket 0 00:05:15.282 EAL: Maximum logical cores by configuration: 128 00:05:15.282 EAL: Detected CPU lcores: 10 00:05:15.282 EAL: Detected NUMA nodes: 1 00:05:15.282 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:15.282 EAL: Detected shared linkage of DPDK 00:05:15.282 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:15.282 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:15.282 EAL: Registered [vdev] bus. 00:05:15.282 EAL: bus.vdev log level changed from disabled to notice 00:05:15.282 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:15.282 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:15.282 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:15.282 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:15.282 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:15.282 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:15.282 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:15.282 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:15.282 EAL: No shared files mode enabled, IPC will be disabled 00:05:15.282 EAL: No shared files mode enabled, IPC is disabled 00:05:15.282 EAL: Selected IOVA mode 'PA' 00:05:15.282 EAL: Probing VFIO support... 00:05:15.282 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:15.282 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:15.282 EAL: Ask a virtual area of 0x2e000 bytes 00:05:15.282 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:15.282 EAL: Setting up physically contiguous memory... 
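Note: the "Probing VFIO support" lines above explain why this run ends up in IOVA mode 'PA' and why setup.sh bound the controllers to uio_pci_generic earlier: no vfio kernel modules are loaded in the VM. A hedged, illustrative way to reproduce that check outside EAL:

    if [[ -e /sys/module/vfio && -e /sys/module/vfio_pci ]]; then
        echo "vfio loaded: vfio-pci binding and IOVA mode VA are possible"
    else
        echo "vfio missing: expect uio_pci_generic and IOVA mode PA"   # this run
    fi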
00:05:15.282 EAL: Setting maximum number of open files to 524288 00:05:15.282 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:15.282 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:15.282 EAL: Ask a virtual area of 0x61000 bytes 00:05:15.282 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:15.282 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:15.282 EAL: Ask a virtual area of 0x400000000 bytes 00:05:15.282 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:15.282 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:15.282 EAL: Ask a virtual area of 0x61000 bytes 00:05:15.282 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:15.282 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:15.282 EAL: Ask a virtual area of 0x400000000 bytes 00:05:15.282 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:15.282 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:15.282 EAL: Ask a virtual area of 0x61000 bytes 00:05:15.282 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:15.282 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:15.282 EAL: Ask a virtual area of 0x400000000 bytes 00:05:15.282 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:15.282 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:15.282 EAL: Ask a virtual area of 0x61000 bytes 00:05:15.282 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:15.282 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:15.282 EAL: Ask a virtual area of 0x400000000 bytes 00:05:15.282 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:15.282 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:15.282 EAL: Hugepages will be freed exactly as allocated. 00:05:15.282 EAL: No shared files mode enabled, IPC is disabled 00:05:15.282 EAL: No shared files mode enabled, IPC is disabled 00:05:15.541 EAL: TSC frequency is ~2200000 KHz 00:05:15.541 EAL: Main lcore 0 is ready (tid=7f656161ea00;cpuset=[0]) 00:05:15.541 EAL: Trying to obtain current memory policy. 00:05:15.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.541 EAL: Restoring previous memory policy: 0 00:05:15.541 EAL: request: mp_malloc_sync 00:05:15.541 EAL: No shared files mode enabled, IPC is disabled 00:05:15.541 EAL: Heap on socket 0 was expanded by 2MB 00:05:15.541 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:15.541 EAL: No shared files mode enabled, IPC is disabled 00:05:15.541 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:15.541 EAL: Mem event callback 'spdk:(nil)' registered 00:05:15.541 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:15.541 00:05:15.541 00:05:15.541 CUnit - A unit testing framework for C - Version 2.1-3 00:05:15.541 http://cunit.sourceforge.net/ 00:05:15.541 00:05:15.541 00:05:15.541 Suite: components_suite 00:05:15.541 Test: vtophys_malloc_test ...passed 00:05:15.541 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
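Note: the repeated 0x400000000-byte reservations above follow directly from the segment-list geometry EAL prints: 8192 segments per list at a 2 MiB (2097152-byte) hugepage size is exactly 16 GiB of virtual address space per list. A quick check of the arithmetic:

    echo $(( 8192 * 2097152 ))             # 17179869184 bytes
    printf '0x%x\n' $(( 8192 * 2097152 ))  # 0x400000000, matching each reservation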
00:05:15.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.541 EAL: Restoring previous memory policy: 4 00:05:15.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.541 EAL: request: mp_malloc_sync 00:05:15.541 EAL: No shared files mode enabled, IPC is disabled 00:05:15.541 EAL: Heap on socket 0 was expanded by 4MB 00:05:15.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.541 EAL: request: mp_malloc_sync 00:05:15.541 EAL: No shared files mode enabled, IPC is disabled 00:05:15.541 EAL: Heap on socket 0 was shrunk by 4MB 00:05:15.541 EAL: Trying to obtain current memory policy. 00:05:15.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.541 EAL: Restoring previous memory policy: 4 00:05:15.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.541 EAL: request: mp_malloc_sync 00:05:15.541 EAL: No shared files mode enabled, IPC is disabled 00:05:15.541 EAL: Heap on socket 0 was expanded by 6MB 00:05:15.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.541 EAL: request: mp_malloc_sync 00:05:15.541 EAL: No shared files mode enabled, IPC is disabled 00:05:15.541 EAL: Heap on socket 0 was shrunk by 6MB 00:05:15.541 EAL: Trying to obtain current memory policy. 00:05:15.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.541 EAL: Restoring previous memory policy: 4 00:05:15.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.541 EAL: request: mp_malloc_sync 00:05:15.541 EAL: No shared files mode enabled, IPC is disabled 00:05:15.541 EAL: Heap on socket 0 was expanded by 10MB 00:05:15.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.541 EAL: request: mp_malloc_sync 00:05:15.541 EAL: No shared files mode enabled, IPC is disabled 00:05:15.541 EAL: Heap on socket 0 was shrunk by 10MB 00:05:15.541 EAL: Trying to obtain current memory policy. 00:05:15.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.541 EAL: Restoring previous memory policy: 4 00:05:15.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.541 EAL: request: mp_malloc_sync 00:05:15.541 EAL: No shared files mode enabled, IPC is disabled 00:05:15.541 EAL: Heap on socket 0 was expanded by 18MB 00:05:15.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.541 EAL: request: mp_malloc_sync 00:05:15.541 EAL: No shared files mode enabled, IPC is disabled 00:05:15.541 EAL: Heap on socket 0 was shrunk by 18MB 00:05:15.541 EAL: Trying to obtain current memory policy. 00:05:15.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.541 EAL: Restoring previous memory policy: 4 00:05:15.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.541 EAL: request: mp_malloc_sync 00:05:15.541 EAL: No shared files mode enabled, IPC is disabled 00:05:15.541 EAL: Heap on socket 0 was expanded by 34MB 00:05:15.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.541 EAL: request: mp_malloc_sync 00:05:15.541 EAL: No shared files mode enabled, IPC is disabled 00:05:15.541 EAL: Heap on socket 0 was shrunk by 34MB 00:05:15.541 EAL: Trying to obtain current memory policy. 
00:05:15.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.541 EAL: Restoring previous memory policy: 4 00:05:15.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.541 EAL: request: mp_malloc_sync 00:05:15.541 EAL: No shared files mode enabled, IPC is disabled 00:05:15.541 EAL: Heap on socket 0 was expanded by 66MB 00:05:15.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.541 EAL: request: mp_malloc_sync 00:05:15.541 EAL: No shared files mode enabled, IPC is disabled 00:05:15.541 EAL: Heap on socket 0 was shrunk by 66MB 00:05:15.541 EAL: Trying to obtain current memory policy. 00:05:15.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.541 EAL: Restoring previous memory policy: 4 00:05:15.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.541 EAL: request: mp_malloc_sync 00:05:15.541 EAL: No shared files mode enabled, IPC is disabled 00:05:15.541 EAL: Heap on socket 0 was expanded by 130MB 00:05:15.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.541 EAL: request: mp_malloc_sync 00:05:15.541 EAL: No shared files mode enabled, IPC is disabled 00:05:15.541 EAL: Heap on socket 0 was shrunk by 130MB 00:05:15.541 EAL: Trying to obtain current memory policy. 00:05:15.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.799 EAL: Restoring previous memory policy: 4 00:05:15.799 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.799 EAL: request: mp_malloc_sync 00:05:15.799 EAL: No shared files mode enabled, IPC is disabled 00:05:15.799 EAL: Heap on socket 0 was expanded by 258MB 00:05:15.799 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.799 EAL: request: mp_malloc_sync 00:05:15.799 EAL: No shared files mode enabled, IPC is disabled 00:05:15.799 EAL: Heap on socket 0 was shrunk by 258MB 00:05:15.799 EAL: Trying to obtain current memory policy. 00:05:15.799 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.057 EAL: Restoring previous memory policy: 4 00:05:16.057 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.057 EAL: request: mp_malloc_sync 00:05:16.057 EAL: No shared files mode enabled, IPC is disabled 00:05:16.057 EAL: Heap on socket 0 was expanded by 514MB 00:05:16.057 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.057 EAL: request: mp_malloc_sync 00:05:16.057 EAL: No shared files mode enabled, IPC is disabled 00:05:16.057 EAL: Heap on socket 0 was shrunk by 514MB 00:05:16.057 EAL: Trying to obtain current memory policy. 
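Note: the heap expansions reported so far (4, 6, 10, 18, 34, 66, 130, 258, 514 MB, with 1026 MB to follow) fit 2 + 2^n MB per round, consistent with vtophys_spdk_malloc_test doubling its allocation each pass on top of the 2 MB already mapped for the main lcore; this is a reading of the printed numbers, not a claim about the test source. The sequence can be reproduced with:

    for n in $(seq 1 10); do printf '%sMB ' $(( 2 + 2**n )); done; echo
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB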
00:05:16.057 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.316 EAL: Restoring previous memory policy: 4 00:05:16.316 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.316 EAL: request: mp_malloc_sync 00:05:16.316 EAL: No shared files mode enabled, IPC is disabled 00:05:16.316 EAL: Heap on socket 0 was expanded by 1026MB 00:05:16.575 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.835 EAL: request: mp_malloc_sync 00:05:16.835 passed 00:05:16.835 00:05:16.835 Run Summary: Type Total Ran Passed Failed Inactive 00:05:16.835 suites 1 1 n/a 0 0 00:05:16.835 tests 2 2 2 0 0 00:05:16.835 asserts 5141 5141 5141 0 n/a 00:05:16.835 00:05:16.835 Elapsed time = 1.271 seconds 00:05:16.835 EAL: No shared files mode enabled, IPC is disabled 00:05:16.835 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:16.835 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.835 EAL: request: mp_malloc_sync 00:05:16.835 EAL: No shared files mode enabled, IPC is disabled 00:05:16.835 EAL: Heap on socket 0 was shrunk by 2MB 00:05:16.835 EAL: No shared files mode enabled, IPC is disabled 00:05:16.835 EAL: No shared files mode enabled, IPC is disabled 00:05:16.835 EAL: No shared files mode enabled, IPC is disabled 00:05:16.835 00:05:16.835 real 0m1.473s 00:05:16.835 user 0m0.816s 00:05:16.835 sys 0m0.521s 00:05:16.835 14:51:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.835 ************************************ 00:05:16.835 END TEST env_vtophys 00:05:16.835 ************************************ 00:05:16.835 14:51:40 -- common/autotest_common.sh@10 -- # set +x 00:05:16.835 14:51:40 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:16.835 14:51:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.835 14:51:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.835 14:51:40 -- common/autotest_common.sh@10 -- # set +x 00:05:16.835 ************************************ 00:05:16.835 START TEST env_pci 00:05:16.835 ************************************ 00:05:16.835 14:51:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:16.835 00:05:16.835 00:05:16.835 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.835 http://cunit.sourceforge.net/ 00:05:16.835 00:05:16.835 00:05:16.835 Suite: pci 00:05:16.835 Test: pci_hook ...[2024-11-18 14:51:40.314924] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 67487 has claimed it 00:05:16.835 passed 00:05:16.835 00:05:16.835 Run Summary: Type Total Ran Passed Failed Inactive 00:05:16.835 suites 1 1 n/a 0 0 00:05:16.835 tests 1 1 1 0 0 00:05:16.835 asserts 25 25 25 0 n/a 00:05:16.835 00:05:16.835 Elapsed time = 0.002 seconds 00:05:16.835 EAL: Cannot find device (10000:00:01.0) 00:05:16.835 EAL: Failed to attach device on primary process 00:05:16.835 00:05:16.835 real 0m0.019s 00:05:16.835 user 0m0.007s 00:05:16.835 sys 0m0.011s 00:05:16.835 14:51:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.835 ************************************ 00:05:16.835 END TEST env_pci 00:05:16.835 ************************************ 00:05:16.835 14:51:40 -- common/autotest_common.sh@10 -- # set +x 00:05:16.835 14:51:40 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:16.835 14:51:40 -- env/env.sh@15 -- # uname 00:05:16.835 14:51:40 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:16.835 14:51:40 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:16.835 14:51:40 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:16.835 14:51:40 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:16.835 14:51:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.835 14:51:40 -- common/autotest_common.sh@10 -- # set +x 00:05:16.835 ************************************ 00:05:16.835 START TEST env_dpdk_post_init 00:05:16.835 ************************************ 00:05:16.835 14:51:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:16.835 EAL: Detected CPU lcores: 10 00:05:16.835 EAL: Detected NUMA nodes: 1 00:05:16.835 EAL: Detected shared linkage of DPDK 00:05:16.835 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:16.835 EAL: Selected IOVA mode 'PA' 00:05:17.094 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:17.094 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:17.094 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:17.094 Starting DPDK initialization... 00:05:17.094 Starting SPDK post initialization... 00:05:17.094 SPDK NVMe probe 00:05:17.094 Attaching to 0000:00:06.0 00:05:17.094 Attaching to 0000:00:07.0 00:05:17.094 Attached to 0000:00:06.0 00:05:17.094 Attached to 0000:00:07.0 00:05:17.094 Cleaning up... 00:05:17.094 00:05:17.094 real 0m0.178s 00:05:17.094 user 0m0.045s 00:05:17.094 sys 0m0.034s 00:05:17.094 14:51:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:17.094 ************************************ 00:05:17.094 END TEST env_dpdk_post_init 00:05:17.094 ************************************ 00:05:17.094 14:51:40 -- common/autotest_common.sh@10 -- # set +x 00:05:17.095 14:51:40 -- env/env.sh@26 -- # uname 00:05:17.095 14:51:40 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:17.095 14:51:40 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:17.095 14:51:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.095 14:51:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.095 14:51:40 -- common/autotest_common.sh@10 -- # set +x 00:05:17.095 ************************************ 00:05:17.095 START TEST env_mem_callbacks 00:05:17.095 ************************************ 00:05:17.095 14:51:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:17.095 EAL: Detected CPU lcores: 10 00:05:17.095 EAL: Detected NUMA nodes: 1 00:05:17.095 EAL: Detected shared linkage of DPDK 00:05:17.095 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:17.095 EAL: Selected IOVA mode 'PA' 00:05:17.354 00:05:17.354 00:05:17.354 CUnit - A unit testing framework for C - Version 2.1-3 00:05:17.354 http://cunit.sourceforge.net/ 00:05:17.354 00:05:17.354 00:05:17.354 Suite: memory 00:05:17.354 Test: test ... 
00:05:17.354 register 0x200000200000 2097152 00:05:17.354 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:17.354 malloc 3145728 00:05:17.354 register 0x200000400000 4194304 00:05:17.354 buf 0x200000500000 len 3145728 PASSED 00:05:17.354 malloc 64 00:05:17.354 buf 0x2000004fff40 len 64 PASSED 00:05:17.354 malloc 4194304 00:05:17.354 register 0x200000800000 6291456 00:05:17.354 buf 0x200000a00000 len 4194304 PASSED 00:05:17.354 free 0x200000500000 3145728 00:05:17.354 free 0x2000004fff40 64 00:05:17.354 unregister 0x200000400000 4194304 PASSED 00:05:17.354 free 0x200000a00000 4194304 00:05:17.354 unregister 0x200000800000 6291456 PASSED 00:05:17.354 malloc 8388608 00:05:17.354 register 0x200000400000 10485760 00:05:17.354 buf 0x200000600000 len 8388608 PASSED 00:05:17.354 free 0x200000600000 8388608 00:05:17.354 unregister 0x200000400000 10485760 PASSED 00:05:17.354 passed 00:05:17.354 00:05:17.354 Run Summary: Type Total Ran Passed Failed Inactive 00:05:17.354 suites 1 1 n/a 0 0 00:05:17.354 tests 1 1 1 0 0 00:05:17.354 asserts 15 15 15 0 n/a 00:05:17.354 00:05:17.354 Elapsed time = 0.008 seconds 00:05:17.354 ************************************ 00:05:17.354 END TEST env_mem_callbacks 00:05:17.354 ************************************ 00:05:17.354 00:05:17.354 real 0m0.143s 00:05:17.354 user 0m0.012s 00:05:17.354 sys 0m0.028s 00:05:17.354 14:51:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:17.354 14:51:40 -- common/autotest_common.sh@10 -- # set +x 00:05:17.354 ************************************ 00:05:17.354 END TEST env 00:05:17.354 ************************************ 00:05:17.354 00:05:17.354 real 0m2.530s 00:05:17.354 user 0m1.296s 00:05:17.354 sys 0m0.858s 00:05:17.354 14:51:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:17.354 14:51:40 -- common/autotest_common.sh@10 -- # set +x 00:05:17.354 14:51:40 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:17.354 14:51:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.354 14:51:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.354 14:51:40 -- common/autotest_common.sh@10 -- # set +x 00:05:17.354 ************************************ 00:05:17.354 START TEST rpc 00:05:17.354 ************************************ 00:05:17.354 14:51:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:17.354 * Looking for test storage... 
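Note: every "START TEST"/"END TEST" banner in this log comes from the harness's run_test wrapper, which times a named sub-test and propagates its exit status. A rough sketch of that convention, not the actual helper in test/common/autotest_common.sh (which also validates its arguments and manages xtrace):

    run_test_sketch() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"; local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }
    # e.g. run_test_sketch env /home/vagrant/spdk_repo/spdk/test/env/env.sh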
00:05:17.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:17.354 14:51:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:17.354 14:51:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:17.354 14:51:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:17.613 14:51:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:17.613 14:51:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:17.613 14:51:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:17.613 14:51:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:17.613 14:51:41 -- scripts/common.sh@335 -- # IFS=.-: 00:05:17.613 14:51:41 -- scripts/common.sh@335 -- # read -ra ver1 00:05:17.613 14:51:41 -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.613 14:51:41 -- scripts/common.sh@336 -- # read -ra ver2 00:05:17.613 14:51:41 -- scripts/common.sh@337 -- # local 'op=<' 00:05:17.613 14:51:41 -- scripts/common.sh@339 -- # ver1_l=2 00:05:17.613 14:51:41 -- scripts/common.sh@340 -- # ver2_l=1 00:05:17.613 14:51:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:17.613 14:51:41 -- scripts/common.sh@343 -- # case "$op" in 00:05:17.613 14:51:41 -- scripts/common.sh@344 -- # : 1 00:05:17.613 14:51:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:17.613 14:51:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.613 14:51:41 -- scripts/common.sh@364 -- # decimal 1 00:05:17.613 14:51:41 -- scripts/common.sh@352 -- # local d=1 00:05:17.613 14:51:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.613 14:51:41 -- scripts/common.sh@354 -- # echo 1 00:05:17.613 14:51:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:17.613 14:51:41 -- scripts/common.sh@365 -- # decimal 2 00:05:17.613 14:51:41 -- scripts/common.sh@352 -- # local d=2 00:05:17.613 14:51:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.613 14:51:41 -- scripts/common.sh@354 -- # echo 2 00:05:17.613 14:51:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:17.613 14:51:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:17.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
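Note: the rpc.sh setup visible above and below is: launch spdk_tgt with the bdev tracepoint group enabled, wait for the /var/tmp/spdk.sock RPC socket, then drive the target with scripts/rpc.py. A stripped-down sketch of that flow (the real waitforlisten also enforces a timeout and checks that the pid is still alive):

    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/build/bin/spdk_tgt" -e bdev &
    spdk_pid=$!
    until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done
    "$spdk/scripts/rpc.py" bdev_malloc_create 8 512    # -> Malloc0, 16384 x 512B blocks, as below
    "$spdk/scripts/rpc.py" bdev_get_bdevs | jq length
    kill "$spdk_pid"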
00:05:17.613 14:51:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:17.613 14:51:41 -- scripts/common.sh@367 -- # return 0 00:05:17.613 14:51:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.613 14:51:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:17.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.613 --rc genhtml_branch_coverage=1 00:05:17.613 --rc genhtml_function_coverage=1 00:05:17.613 --rc genhtml_legend=1 00:05:17.613 --rc geninfo_all_blocks=1 00:05:17.613 --rc geninfo_unexecuted_blocks=1 00:05:17.613 00:05:17.613 ' 00:05:17.613 14:51:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:17.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.613 --rc genhtml_branch_coverage=1 00:05:17.613 --rc genhtml_function_coverage=1 00:05:17.613 --rc genhtml_legend=1 00:05:17.613 --rc geninfo_all_blocks=1 00:05:17.613 --rc geninfo_unexecuted_blocks=1 00:05:17.613 00:05:17.613 ' 00:05:17.613 14:51:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:17.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.613 --rc genhtml_branch_coverage=1 00:05:17.613 --rc genhtml_function_coverage=1 00:05:17.613 --rc genhtml_legend=1 00:05:17.613 --rc geninfo_all_blocks=1 00:05:17.613 --rc geninfo_unexecuted_blocks=1 00:05:17.613 00:05:17.613 ' 00:05:17.614 14:51:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:17.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.614 --rc genhtml_branch_coverage=1 00:05:17.614 --rc genhtml_function_coverage=1 00:05:17.614 --rc genhtml_legend=1 00:05:17.614 --rc geninfo_all_blocks=1 00:05:17.614 --rc geninfo_unexecuted_blocks=1 00:05:17.614 00:05:17.614 ' 00:05:17.614 14:51:41 -- rpc/rpc.sh@65 -- # spdk_pid=67609 00:05:17.614 14:51:41 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.614 14:51:41 -- rpc/rpc.sh@67 -- # waitforlisten 67609 00:05:17.614 14:51:41 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:17.614 14:51:41 -- common/autotest_common.sh@829 -- # '[' -z 67609 ']' 00:05:17.614 14:51:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.614 14:51:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.614 14:51:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.614 14:51:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.614 14:51:41 -- common/autotest_common.sh@10 -- # set +x 00:05:17.614 [2024-11-18 14:51:41.106355] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:17.614 [2024-11-18 14:51:41.106459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67609 ] 00:05:17.872 [2024-11-18 14:51:41.244566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.872 [2024-11-18 14:51:41.302114] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:17.872 [2024-11-18 14:51:41.302258] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
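Note: with the bdev tracepoint group enabled, the target keeps its trace buffer in /dev/shm, and its app_setup_trace notice (just below) names the spdk_trace invocation for a live snapshot of this pid. A hedged sketch of both options, assuming spdk_trace sits in the same build/bin layout as spdk_tgt above:

    # live snapshot, as the startup notice below suggests
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 67609
    # or keep the shared-memory trace file for offline analysis
    cp /dev/shm/spdk_tgt_trace.pid67609 /tmp/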
00:05:17.872 [2024-11-18 14:51:41.302272] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 67609' to capture a snapshot of events at runtime. 00:05:17.872 [2024-11-18 14:51:41.302281] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid67609 for offline analysis/debug. 00:05:17.872 [2024-11-18 14:51:41.302308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.810 14:51:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.810 14:51:42 -- common/autotest_common.sh@862 -- # return 0 00:05:18.810 14:51:42 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:18.810 14:51:42 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:18.810 14:51:42 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:18.810 14:51:42 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:18.810 14:51:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.810 14:51:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.810 14:51:42 -- common/autotest_common.sh@10 -- # set +x 00:05:18.810 ************************************ 00:05:18.810 START TEST rpc_integrity 00:05:18.810 ************************************ 00:05:18.810 14:51:42 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:18.810 14:51:42 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:18.810 14:51:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.810 14:51:42 -- common/autotest_common.sh@10 -- # set +x 00:05:18.810 14:51:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.810 14:51:42 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:18.810 14:51:42 -- rpc/rpc.sh@13 -- # jq length 00:05:18.810 14:51:42 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:18.810 14:51:42 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:18.810 14:51:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.810 14:51:42 -- common/autotest_common.sh@10 -- # set +x 00:05:18.810 14:51:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.810 14:51:42 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:18.810 14:51:42 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:18.810 14:51:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.810 14:51:42 -- common/autotest_common.sh@10 -- # set +x 00:05:18.810 14:51:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.810 14:51:42 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:18.810 { 00:05:18.810 "aliases": [ 00:05:18.810 "85eae3a0-74ae-4bbc-9c05-ebf26af80d89" 00:05:18.810 ], 00:05:18.810 "assigned_rate_limits": { 00:05:18.810 "r_mbytes_per_sec": 0, 00:05:18.810 "rw_ios_per_sec": 0, 00:05:18.810 "rw_mbytes_per_sec": 0, 00:05:18.810 "w_mbytes_per_sec": 0 00:05:18.810 }, 00:05:18.810 "block_size": 512, 00:05:18.810 "claimed": false, 00:05:18.810 "driver_specific": {}, 00:05:18.810 "memory_domains": [ 00:05:18.810 { 00:05:18.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.810 "dma_device_type": 2 00:05:18.810 } 00:05:18.810 ], 00:05:18.810 "name": "Malloc0", 00:05:18.810 "num_blocks": 16384, 00:05:18.810 "product_name": "Malloc disk", 00:05:18.810 "supported_io_types": { 00:05:18.810 "abort": true, 00:05:18.810 
"compare": false, 00:05:18.810 "compare_and_write": false, 00:05:18.810 "flush": true, 00:05:18.810 "nvme_admin": false, 00:05:18.810 "nvme_io": false, 00:05:18.810 "read": true, 00:05:18.810 "reset": true, 00:05:18.810 "unmap": true, 00:05:18.810 "write": true, 00:05:18.810 "write_zeroes": true 00:05:18.810 }, 00:05:18.810 "uuid": "85eae3a0-74ae-4bbc-9c05-ebf26af80d89", 00:05:18.810 "zoned": false 00:05:18.810 } 00:05:18.810 ]' 00:05:18.810 14:51:42 -- rpc/rpc.sh@17 -- # jq length 00:05:18.810 14:51:42 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:18.810 14:51:42 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:18.810 14:51:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.810 14:51:42 -- common/autotest_common.sh@10 -- # set +x 00:05:18.810 [2024-11-18 14:51:42.300586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:18.810 [2024-11-18 14:51:42.300628] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:18.810 [2024-11-18 14:51:42.300644] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa35b60 00:05:18.810 [2024-11-18 14:51:42.300653] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:18.810 [2024-11-18 14:51:42.301975] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:18.810 [2024-11-18 14:51:42.302006] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:18.810 Passthru0 00:05:18.810 14:51:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.810 14:51:42 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:18.810 14:51:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.810 14:51:42 -- common/autotest_common.sh@10 -- # set +x 00:05:18.811 14:51:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.811 14:51:42 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:18.811 { 00:05:18.811 "aliases": [ 00:05:18.811 "85eae3a0-74ae-4bbc-9c05-ebf26af80d89" 00:05:18.811 ], 00:05:18.811 "assigned_rate_limits": { 00:05:18.811 "r_mbytes_per_sec": 0, 00:05:18.811 "rw_ios_per_sec": 0, 00:05:18.811 "rw_mbytes_per_sec": 0, 00:05:18.811 "w_mbytes_per_sec": 0 00:05:18.811 }, 00:05:18.811 "block_size": 512, 00:05:18.811 "claim_type": "exclusive_write", 00:05:18.811 "claimed": true, 00:05:18.811 "driver_specific": {}, 00:05:18.811 "memory_domains": [ 00:05:18.811 { 00:05:18.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.811 "dma_device_type": 2 00:05:18.811 } 00:05:18.811 ], 00:05:18.811 "name": "Malloc0", 00:05:18.811 "num_blocks": 16384, 00:05:18.811 "product_name": "Malloc disk", 00:05:18.811 "supported_io_types": { 00:05:18.811 "abort": true, 00:05:18.811 "compare": false, 00:05:18.811 "compare_and_write": false, 00:05:18.811 "flush": true, 00:05:18.811 "nvme_admin": false, 00:05:18.811 "nvme_io": false, 00:05:18.811 "read": true, 00:05:18.811 "reset": true, 00:05:18.811 "unmap": true, 00:05:18.811 "write": true, 00:05:18.811 "write_zeroes": true 00:05:18.811 }, 00:05:18.811 "uuid": "85eae3a0-74ae-4bbc-9c05-ebf26af80d89", 00:05:18.811 "zoned": false 00:05:18.811 }, 00:05:18.811 { 00:05:18.811 "aliases": [ 00:05:18.811 "eb660fd5-1139-56f6-83d4-cbfaf9ad47d0" 00:05:18.811 ], 00:05:18.811 "assigned_rate_limits": { 00:05:18.811 "r_mbytes_per_sec": 0, 00:05:18.811 "rw_ios_per_sec": 0, 00:05:18.811 "rw_mbytes_per_sec": 0, 00:05:18.811 "w_mbytes_per_sec": 0 00:05:18.811 }, 00:05:18.811 "block_size": 512, 00:05:18.811 "claimed": false, 00:05:18.811 
"driver_specific": { 00:05:18.811 "passthru": { 00:05:18.811 "base_bdev_name": "Malloc0", 00:05:18.811 "name": "Passthru0" 00:05:18.811 } 00:05:18.811 }, 00:05:18.811 "memory_domains": [ 00:05:18.811 { 00:05:18.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.811 "dma_device_type": 2 00:05:18.811 } 00:05:18.811 ], 00:05:18.811 "name": "Passthru0", 00:05:18.811 "num_blocks": 16384, 00:05:18.811 "product_name": "passthru", 00:05:18.811 "supported_io_types": { 00:05:18.811 "abort": true, 00:05:18.811 "compare": false, 00:05:18.811 "compare_and_write": false, 00:05:18.811 "flush": true, 00:05:18.811 "nvme_admin": false, 00:05:18.811 "nvme_io": false, 00:05:18.811 "read": true, 00:05:18.811 "reset": true, 00:05:18.811 "unmap": true, 00:05:18.811 "write": true, 00:05:18.811 "write_zeroes": true 00:05:18.811 }, 00:05:18.811 "uuid": "eb660fd5-1139-56f6-83d4-cbfaf9ad47d0", 00:05:18.811 "zoned": false 00:05:18.811 } 00:05:18.811 ]' 00:05:18.811 14:51:42 -- rpc/rpc.sh@21 -- # jq length 00:05:18.811 14:51:42 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:18.811 14:51:42 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:18.811 14:51:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.811 14:51:42 -- common/autotest_common.sh@10 -- # set +x 00:05:19.070 14:51:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.070 14:51:42 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:19.070 14:51:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.070 14:51:42 -- common/autotest_common.sh@10 -- # set +x 00:05:19.070 14:51:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.070 14:51:42 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:19.070 14:51:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.070 14:51:42 -- common/autotest_common.sh@10 -- # set +x 00:05:19.070 14:51:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.070 14:51:42 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:19.070 14:51:42 -- rpc/rpc.sh@26 -- # jq length 00:05:19.070 ************************************ 00:05:19.070 END TEST rpc_integrity 00:05:19.070 ************************************ 00:05:19.070 14:51:42 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:19.070 00:05:19.070 real 0m0.323s 00:05:19.070 user 0m0.217s 00:05:19.070 sys 0m0.034s 00:05:19.070 14:51:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.070 14:51:42 -- common/autotest_common.sh@10 -- # set +x 00:05:19.070 14:51:42 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:19.070 14:51:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.070 14:51:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.070 14:51:42 -- common/autotest_common.sh@10 -- # set +x 00:05:19.070 ************************************ 00:05:19.070 START TEST rpc_plugins 00:05:19.070 ************************************ 00:05:19.070 14:51:42 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:19.070 14:51:42 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:19.070 14:51:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.070 14:51:42 -- common/autotest_common.sh@10 -- # set +x 00:05:19.070 14:51:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.070 14:51:42 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:19.070 14:51:42 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:19.070 14:51:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.070 14:51:42 -- common/autotest_common.sh@10 -- # set +x 00:05:19.070 
14:51:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.070 14:51:42 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:19.070 { 00:05:19.070 "aliases": [ 00:05:19.070 "1310597a-f579-435b-b71f-75657dfa7e97" 00:05:19.070 ], 00:05:19.070 "assigned_rate_limits": { 00:05:19.070 "r_mbytes_per_sec": 0, 00:05:19.070 "rw_ios_per_sec": 0, 00:05:19.070 "rw_mbytes_per_sec": 0, 00:05:19.070 "w_mbytes_per_sec": 0 00:05:19.070 }, 00:05:19.070 "block_size": 4096, 00:05:19.070 "claimed": false, 00:05:19.070 "driver_specific": {}, 00:05:19.070 "memory_domains": [ 00:05:19.070 { 00:05:19.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.070 "dma_device_type": 2 00:05:19.070 } 00:05:19.070 ], 00:05:19.070 "name": "Malloc1", 00:05:19.070 "num_blocks": 256, 00:05:19.070 "product_name": "Malloc disk", 00:05:19.070 "supported_io_types": { 00:05:19.070 "abort": true, 00:05:19.070 "compare": false, 00:05:19.070 "compare_and_write": false, 00:05:19.070 "flush": true, 00:05:19.070 "nvme_admin": false, 00:05:19.070 "nvme_io": false, 00:05:19.070 "read": true, 00:05:19.070 "reset": true, 00:05:19.070 "unmap": true, 00:05:19.070 "write": true, 00:05:19.070 "write_zeroes": true 00:05:19.070 }, 00:05:19.070 "uuid": "1310597a-f579-435b-b71f-75657dfa7e97", 00:05:19.070 "zoned": false 00:05:19.070 } 00:05:19.070 ]' 00:05:19.070 14:51:42 -- rpc/rpc.sh@32 -- # jq length 00:05:19.070 14:51:42 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:19.070 14:51:42 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:19.070 14:51:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.070 14:51:42 -- common/autotest_common.sh@10 -- # set +x 00:05:19.070 14:51:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.070 14:51:42 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:19.070 14:51:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.070 14:51:42 -- common/autotest_common.sh@10 -- # set +x 00:05:19.070 14:51:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.070 14:51:42 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:19.070 14:51:42 -- rpc/rpc.sh@36 -- # jq length 00:05:19.329 ************************************ 00:05:19.329 END TEST rpc_plugins 00:05:19.329 ************************************ 00:05:19.329 14:51:42 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:19.329 00:05:19.329 real 0m0.154s 00:05:19.329 user 0m0.103s 00:05:19.329 sys 0m0.018s 00:05:19.329 14:51:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.329 14:51:42 -- common/autotest_common.sh@10 -- # set +x 00:05:19.329 14:51:42 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:19.329 14:51:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.329 14:51:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.329 14:51:42 -- common/autotest_common.sh@10 -- # set +x 00:05:19.329 ************************************ 00:05:19.329 START TEST rpc_trace_cmd_test 00:05:19.329 ************************************ 00:05:19.329 14:51:42 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:19.329 14:51:42 -- rpc/rpc.sh@40 -- # local info 00:05:19.329 14:51:42 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:19.329 14:51:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.329 14:51:42 -- common/autotest_common.sh@10 -- # set +x 00:05:19.329 14:51:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.329 14:51:42 -- rpc/rpc.sh@42 -- # info='{ 00:05:19.329 "bdev": { 00:05:19.329 "mask": "0x8", 00:05:19.329 "tpoint_mask": 
"0xffffffffffffffff" 00:05:19.329 }, 00:05:19.329 "bdev_nvme": { 00:05:19.329 "mask": "0x4000", 00:05:19.329 "tpoint_mask": "0x0" 00:05:19.329 }, 00:05:19.329 "blobfs": { 00:05:19.329 "mask": "0x80", 00:05:19.329 "tpoint_mask": "0x0" 00:05:19.329 }, 00:05:19.329 "dsa": { 00:05:19.329 "mask": "0x200", 00:05:19.329 "tpoint_mask": "0x0" 00:05:19.329 }, 00:05:19.329 "ftl": { 00:05:19.329 "mask": "0x40", 00:05:19.329 "tpoint_mask": "0x0" 00:05:19.329 }, 00:05:19.329 "iaa": { 00:05:19.329 "mask": "0x1000", 00:05:19.329 "tpoint_mask": "0x0" 00:05:19.329 }, 00:05:19.329 "iscsi_conn": { 00:05:19.329 "mask": "0x2", 00:05:19.329 "tpoint_mask": "0x0" 00:05:19.329 }, 00:05:19.329 "nvme_pcie": { 00:05:19.329 "mask": "0x800", 00:05:19.329 "tpoint_mask": "0x0" 00:05:19.329 }, 00:05:19.329 "nvme_tcp": { 00:05:19.329 "mask": "0x2000", 00:05:19.329 "tpoint_mask": "0x0" 00:05:19.329 }, 00:05:19.329 "nvmf_rdma": { 00:05:19.329 "mask": "0x10", 00:05:19.329 "tpoint_mask": "0x0" 00:05:19.329 }, 00:05:19.329 "nvmf_tcp": { 00:05:19.329 "mask": "0x20", 00:05:19.329 "tpoint_mask": "0x0" 00:05:19.329 }, 00:05:19.329 "scsi": { 00:05:19.329 "mask": "0x4", 00:05:19.329 "tpoint_mask": "0x0" 00:05:19.329 }, 00:05:19.329 "thread": { 00:05:19.329 "mask": "0x400", 00:05:19.329 "tpoint_mask": "0x0" 00:05:19.329 }, 00:05:19.329 "tpoint_group_mask": "0x8", 00:05:19.329 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid67609" 00:05:19.329 }' 00:05:19.329 14:51:42 -- rpc/rpc.sh@43 -- # jq length 00:05:19.329 14:51:42 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:19.329 14:51:42 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:19.329 14:51:42 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:19.329 14:51:42 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:19.329 14:51:42 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:19.588 14:51:42 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:19.588 14:51:42 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:19.588 14:51:42 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:19.588 ************************************ 00:05:19.588 END TEST rpc_trace_cmd_test 00:05:19.588 ************************************ 00:05:19.588 14:51:43 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:19.588 00:05:19.588 real 0m0.269s 00:05:19.588 user 0m0.235s 00:05:19.588 sys 0m0.023s 00:05:19.588 14:51:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.588 14:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:19.589 14:51:43 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:19.589 14:51:43 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:19.589 14:51:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.589 14:51:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.589 14:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:19.589 ************************************ 00:05:19.589 START TEST go_rpc 00:05:19.589 ************************************ 00:05:19.589 14:51:43 -- common/autotest_common.sh@1114 -- # go_rpc 00:05:19.589 14:51:43 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:19.589 14:51:43 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:19.589 14:51:43 -- rpc/rpc.sh@52 -- # jq length 00:05:19.589 14:51:43 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:19.589 14:51:43 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:19.589 14:51:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.589 14:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:19.589 14:51:43 -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:05:19.589 14:51:43 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:19.589 14:51:43 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:19.589 14:51:43 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["d87fbe35-7464-4798-b3a5-7ef086c95fe0"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"d87fbe35-7464-4798-b3a5-7ef086c95fe0","zoned":false}]' 00:05:19.589 14:51:43 -- rpc/rpc.sh@57 -- # jq length 00:05:19.847 14:51:43 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:19.847 14:51:43 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:19.847 14:51:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.847 14:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:19.847 14:51:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.847 14:51:43 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:19.847 14:51:43 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:19.847 14:51:43 -- rpc/rpc.sh@61 -- # jq length 00:05:19.847 ************************************ 00:05:19.847 END TEST go_rpc 00:05:19.847 ************************************ 00:05:19.847 14:51:43 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:19.847 00:05:19.847 real 0m0.224s 00:05:19.847 user 0m0.157s 00:05:19.847 sys 0m0.038s 00:05:19.847 14:51:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.847 14:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:19.847 14:51:43 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:19.847 14:51:43 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:19.847 14:51:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.847 14:51:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.847 14:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:19.847 ************************************ 00:05:19.847 START TEST rpc_daemon_integrity 00:05:19.847 ************************************ 00:05:19.847 14:51:43 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:19.847 14:51:43 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:19.847 14:51:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.847 14:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:19.847 14:51:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.847 14:51:43 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:19.847 14:51:43 -- rpc/rpc.sh@13 -- # jq length 00:05:19.847 14:51:43 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:19.847 14:51:43 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:19.847 14:51:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.847 14:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:19.847 14:51:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.847 14:51:43 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:19.847 14:51:43 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:19.847 14:51:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.847 14:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:20.107 14:51:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.107 
14:51:43 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:20.107 { 00:05:20.107 "aliases": [ 00:05:20.107 "493a2c54-ade6-4a5e-a4ac-16b4b6f2a33c" 00:05:20.107 ], 00:05:20.107 "assigned_rate_limits": { 00:05:20.107 "r_mbytes_per_sec": 0, 00:05:20.107 "rw_ios_per_sec": 0, 00:05:20.107 "rw_mbytes_per_sec": 0, 00:05:20.107 "w_mbytes_per_sec": 0 00:05:20.107 }, 00:05:20.107 "block_size": 512, 00:05:20.107 "claimed": false, 00:05:20.107 "driver_specific": {}, 00:05:20.107 "memory_domains": [ 00:05:20.107 { 00:05:20.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.107 "dma_device_type": 2 00:05:20.107 } 00:05:20.107 ], 00:05:20.107 "name": "Malloc3", 00:05:20.107 "num_blocks": 16384, 00:05:20.107 "product_name": "Malloc disk", 00:05:20.107 "supported_io_types": { 00:05:20.107 "abort": true, 00:05:20.107 "compare": false, 00:05:20.107 "compare_and_write": false, 00:05:20.107 "flush": true, 00:05:20.107 "nvme_admin": false, 00:05:20.107 "nvme_io": false, 00:05:20.107 "read": true, 00:05:20.107 "reset": true, 00:05:20.107 "unmap": true, 00:05:20.107 "write": true, 00:05:20.107 "write_zeroes": true 00:05:20.107 }, 00:05:20.107 "uuid": "493a2c54-ade6-4a5e-a4ac-16b4b6f2a33c", 00:05:20.107 "zoned": false 00:05:20.107 } 00:05:20.107 ]' 00:05:20.107 14:51:43 -- rpc/rpc.sh@17 -- # jq length 00:05:20.107 14:51:43 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:20.107 14:51:43 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:20.107 14:51:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.107 14:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:20.107 [2024-11-18 14:51:43.489038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:20.107 [2024-11-18 14:51:43.489079] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:20.107 [2024-11-18 14:51:43.489093] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa37990 00:05:20.107 [2024-11-18 14:51:43.489101] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:20.107 [2024-11-18 14:51:43.490182] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:20.107 [2024-11-18 14:51:43.490200] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:20.107 Passthru0 00:05:20.107 14:51:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.107 14:51:43 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:20.107 14:51:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.107 14:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:20.107 14:51:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.107 14:51:43 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:20.107 { 00:05:20.107 "aliases": [ 00:05:20.107 "493a2c54-ade6-4a5e-a4ac-16b4b6f2a33c" 00:05:20.107 ], 00:05:20.107 "assigned_rate_limits": { 00:05:20.107 "r_mbytes_per_sec": 0, 00:05:20.107 "rw_ios_per_sec": 0, 00:05:20.107 "rw_mbytes_per_sec": 0, 00:05:20.107 "w_mbytes_per_sec": 0 00:05:20.107 }, 00:05:20.107 "block_size": 512, 00:05:20.107 "claim_type": "exclusive_write", 00:05:20.107 "claimed": true, 00:05:20.107 "driver_specific": {}, 00:05:20.107 "memory_domains": [ 00:05:20.107 { 00:05:20.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.107 "dma_device_type": 2 00:05:20.107 } 00:05:20.107 ], 00:05:20.107 "name": "Malloc3", 00:05:20.107 "num_blocks": 16384, 00:05:20.107 "product_name": "Malloc disk", 00:05:20.107 "supported_io_types": { 00:05:20.107 "abort": true, 00:05:20.107 
"compare": false, 00:05:20.107 "compare_and_write": false, 00:05:20.107 "flush": true, 00:05:20.107 "nvme_admin": false, 00:05:20.107 "nvme_io": false, 00:05:20.107 "read": true, 00:05:20.107 "reset": true, 00:05:20.107 "unmap": true, 00:05:20.107 "write": true, 00:05:20.107 "write_zeroes": true 00:05:20.107 }, 00:05:20.107 "uuid": "493a2c54-ade6-4a5e-a4ac-16b4b6f2a33c", 00:05:20.107 "zoned": false 00:05:20.107 }, 00:05:20.107 { 00:05:20.107 "aliases": [ 00:05:20.107 "08e211ee-8457-5a94-a46a-624b38635eff" 00:05:20.107 ], 00:05:20.107 "assigned_rate_limits": { 00:05:20.107 "r_mbytes_per_sec": 0, 00:05:20.107 "rw_ios_per_sec": 0, 00:05:20.107 "rw_mbytes_per_sec": 0, 00:05:20.107 "w_mbytes_per_sec": 0 00:05:20.107 }, 00:05:20.107 "block_size": 512, 00:05:20.107 "claimed": false, 00:05:20.107 "driver_specific": { 00:05:20.107 "passthru": { 00:05:20.107 "base_bdev_name": "Malloc3", 00:05:20.107 "name": "Passthru0" 00:05:20.107 } 00:05:20.107 }, 00:05:20.107 "memory_domains": [ 00:05:20.107 { 00:05:20.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.107 "dma_device_type": 2 00:05:20.107 } 00:05:20.107 ], 00:05:20.107 "name": "Passthru0", 00:05:20.107 "num_blocks": 16384, 00:05:20.107 "product_name": "passthru", 00:05:20.107 "supported_io_types": { 00:05:20.107 "abort": true, 00:05:20.107 "compare": false, 00:05:20.107 "compare_and_write": false, 00:05:20.107 "flush": true, 00:05:20.107 "nvme_admin": false, 00:05:20.107 "nvme_io": false, 00:05:20.107 "read": true, 00:05:20.107 "reset": true, 00:05:20.107 "unmap": true, 00:05:20.107 "write": true, 00:05:20.107 "write_zeroes": true 00:05:20.107 }, 00:05:20.107 "uuid": "08e211ee-8457-5a94-a46a-624b38635eff", 00:05:20.107 "zoned": false 00:05:20.107 } 00:05:20.107 ]' 00:05:20.107 14:51:43 -- rpc/rpc.sh@21 -- # jq length 00:05:20.107 14:51:43 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:20.107 14:51:43 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:20.107 14:51:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.107 14:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:20.107 14:51:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.107 14:51:43 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:20.107 14:51:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.107 14:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:20.107 14:51:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.107 14:51:43 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:20.107 14:51:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.107 14:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:20.107 14:51:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.107 14:51:43 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:20.107 14:51:43 -- rpc/rpc.sh@26 -- # jq length 00:05:20.107 14:51:43 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:20.107 ************************************ 00:05:20.107 END TEST rpc_daemon_integrity 00:05:20.107 ************************************ 00:05:20.107 00:05:20.107 real 0m0.322s 00:05:20.107 user 0m0.219s 00:05:20.107 sys 0m0.033s 00:05:20.107 14:51:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.107 14:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:20.366 14:51:43 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:20.366 14:51:43 -- rpc/rpc.sh@84 -- # killprocess 67609 00:05:20.366 14:51:43 -- common/autotest_common.sh@936 -- # '[' -z 67609 ']' 00:05:20.366 14:51:43 -- common/autotest_common.sh@940 -- # kill -0 
67609 00:05:20.366 14:51:43 -- common/autotest_common.sh@941 -- # uname 00:05:20.366 14:51:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:20.366 14:51:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67609 00:05:20.366 killing process with pid 67609 00:05:20.366 14:51:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:20.366 14:51:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:20.366 14:51:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67609' 00:05:20.366 14:51:43 -- common/autotest_common.sh@955 -- # kill 67609 00:05:20.366 14:51:43 -- common/autotest_common.sh@960 -- # wait 67609 00:05:20.624 00:05:20.624 real 0m3.227s 00:05:20.624 user 0m4.306s 00:05:20.624 sys 0m0.744s 00:05:20.624 14:51:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.624 ************************************ 00:05:20.624 END TEST rpc 00:05:20.624 ************************************ 00:05:20.624 14:51:44 -- common/autotest_common.sh@10 -- # set +x 00:05:20.624 14:51:44 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:20.625 14:51:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.625 14:51:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.625 14:51:44 -- common/autotest_common.sh@10 -- # set +x 00:05:20.625 ************************************ 00:05:20.625 START TEST rpc_client 00:05:20.625 ************************************ 00:05:20.625 14:51:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:20.625 * Looking for test storage... 00:05:20.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:20.884 14:51:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:20.884 14:51:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:20.884 14:51:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:20.884 14:51:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:20.884 14:51:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:20.884 14:51:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:20.884 14:51:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:20.884 14:51:44 -- scripts/common.sh@335 -- # IFS=.-: 00:05:20.884 14:51:44 -- scripts/common.sh@335 -- # read -ra ver1 00:05:20.884 14:51:44 -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.884 14:51:44 -- scripts/common.sh@336 -- # read -ra ver2 00:05:20.884 14:51:44 -- scripts/common.sh@337 -- # local 'op=<' 00:05:20.884 14:51:44 -- scripts/common.sh@339 -- # ver1_l=2 00:05:20.884 14:51:44 -- scripts/common.sh@340 -- # ver2_l=1 00:05:20.884 14:51:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:20.884 14:51:44 -- scripts/common.sh@343 -- # case "$op" in 00:05:20.884 14:51:44 -- scripts/common.sh@344 -- # : 1 00:05:20.884 14:51:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:20.884 14:51:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.884 14:51:44 -- scripts/common.sh@364 -- # decimal 1 00:05:20.884 14:51:44 -- scripts/common.sh@352 -- # local d=1 00:05:20.884 14:51:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.884 14:51:44 -- scripts/common.sh@354 -- # echo 1 00:05:20.884 14:51:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:20.884 14:51:44 -- scripts/common.sh@365 -- # decimal 2 00:05:20.884 14:51:44 -- scripts/common.sh@352 -- # local d=2 00:05:20.884 14:51:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.884 14:51:44 -- scripts/common.sh@354 -- # echo 2 00:05:20.884 14:51:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:20.884 14:51:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:20.884 14:51:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:20.884 14:51:44 -- scripts/common.sh@367 -- # return 0 00:05:20.884 14:51:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.884 14:51:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:20.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.884 --rc genhtml_branch_coverage=1 00:05:20.884 --rc genhtml_function_coverage=1 00:05:20.884 --rc genhtml_legend=1 00:05:20.884 --rc geninfo_all_blocks=1 00:05:20.884 --rc geninfo_unexecuted_blocks=1 00:05:20.884 00:05:20.884 ' 00:05:20.884 14:51:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:20.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.884 --rc genhtml_branch_coverage=1 00:05:20.884 --rc genhtml_function_coverage=1 00:05:20.884 --rc genhtml_legend=1 00:05:20.884 --rc geninfo_all_blocks=1 00:05:20.884 --rc geninfo_unexecuted_blocks=1 00:05:20.884 00:05:20.884 ' 00:05:20.884 14:51:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:20.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.884 --rc genhtml_branch_coverage=1 00:05:20.884 --rc genhtml_function_coverage=1 00:05:20.884 --rc genhtml_legend=1 00:05:20.884 --rc geninfo_all_blocks=1 00:05:20.884 --rc geninfo_unexecuted_blocks=1 00:05:20.884 00:05:20.884 ' 00:05:20.884 14:51:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:20.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.884 --rc genhtml_branch_coverage=1 00:05:20.884 --rc genhtml_function_coverage=1 00:05:20.884 --rc genhtml_legend=1 00:05:20.884 --rc geninfo_all_blocks=1 00:05:20.884 --rc geninfo_unexecuted_blocks=1 00:05:20.884 00:05:20.884 ' 00:05:20.884 14:51:44 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:20.884 OK 00:05:20.884 14:51:44 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:20.884 00:05:20.884 real 0m0.175s 00:05:20.884 user 0m0.101s 00:05:20.884 sys 0m0.086s 00:05:20.884 ************************************ 00:05:20.884 END TEST rpc_client 00:05:20.884 ************************************ 00:05:20.884 14:51:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.884 14:51:44 -- common/autotest_common.sh@10 -- # set +x 00:05:20.884 14:51:44 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:20.884 14:51:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.884 14:51:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.884 14:51:44 -- common/autotest_common.sh@10 -- # set +x 00:05:20.884 ************************************ 00:05:20.884 START TEST 
json_config 00:05:20.884 ************************************ 00:05:20.884 14:51:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:20.884 14:51:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:20.884 14:51:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:20.884 14:51:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:21.144 14:51:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:21.144 14:51:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:21.144 14:51:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:21.144 14:51:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:21.144 14:51:44 -- scripts/common.sh@335 -- # IFS=.-: 00:05:21.144 14:51:44 -- scripts/common.sh@335 -- # read -ra ver1 00:05:21.144 14:51:44 -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.144 14:51:44 -- scripts/common.sh@336 -- # read -ra ver2 00:05:21.144 14:51:44 -- scripts/common.sh@337 -- # local 'op=<' 00:05:21.144 14:51:44 -- scripts/common.sh@339 -- # ver1_l=2 00:05:21.144 14:51:44 -- scripts/common.sh@340 -- # ver2_l=1 00:05:21.144 14:51:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:21.144 14:51:44 -- scripts/common.sh@343 -- # case "$op" in 00:05:21.144 14:51:44 -- scripts/common.sh@344 -- # : 1 00:05:21.144 14:51:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:21.144 14:51:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.144 14:51:44 -- scripts/common.sh@364 -- # decimal 1 00:05:21.144 14:51:44 -- scripts/common.sh@352 -- # local d=1 00:05:21.144 14:51:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.144 14:51:44 -- scripts/common.sh@354 -- # echo 1 00:05:21.144 14:51:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:21.144 14:51:44 -- scripts/common.sh@365 -- # decimal 2 00:05:21.144 14:51:44 -- scripts/common.sh@352 -- # local d=2 00:05:21.144 14:51:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.144 14:51:44 -- scripts/common.sh@354 -- # echo 2 00:05:21.144 14:51:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:21.144 14:51:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:21.144 14:51:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:21.144 14:51:44 -- scripts/common.sh@367 -- # return 0 00:05:21.144 14:51:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.144 14:51:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:21.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.144 --rc genhtml_branch_coverage=1 00:05:21.144 --rc genhtml_function_coverage=1 00:05:21.144 --rc genhtml_legend=1 00:05:21.144 --rc geninfo_all_blocks=1 00:05:21.144 --rc geninfo_unexecuted_blocks=1 00:05:21.144 00:05:21.144 ' 00:05:21.144 14:51:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:21.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.144 --rc genhtml_branch_coverage=1 00:05:21.144 --rc genhtml_function_coverage=1 00:05:21.144 --rc genhtml_legend=1 00:05:21.144 --rc geninfo_all_blocks=1 00:05:21.144 --rc geninfo_unexecuted_blocks=1 00:05:21.144 00:05:21.144 ' 00:05:21.144 14:51:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:21.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.144 --rc genhtml_branch_coverage=1 00:05:21.144 --rc genhtml_function_coverage=1 00:05:21.144 --rc genhtml_legend=1 00:05:21.144 --rc 
geninfo_all_blocks=1 00:05:21.144 --rc geninfo_unexecuted_blocks=1 00:05:21.144 00:05:21.144 ' 00:05:21.144 14:51:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:21.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.144 --rc genhtml_branch_coverage=1 00:05:21.144 --rc genhtml_function_coverage=1 00:05:21.144 --rc genhtml_legend=1 00:05:21.144 --rc geninfo_all_blocks=1 00:05:21.144 --rc geninfo_unexecuted_blocks=1 00:05:21.144 00:05:21.144 ' 00:05:21.144 14:51:44 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:21.144 14:51:44 -- nvmf/common.sh@7 -- # uname -s 00:05:21.144 14:51:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:21.144 14:51:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:21.144 14:51:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:21.144 14:51:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:21.144 14:51:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:21.144 14:51:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:21.144 14:51:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:21.144 14:51:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:21.144 14:51:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:21.144 14:51:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:21.144 14:51:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:05:21.144 14:51:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:05:21.144 14:51:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:21.144 14:51:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:21.144 14:51:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:21.144 14:51:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:21.144 14:51:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:21.144 14:51:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.144 14:51:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.144 14:51:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.144 14:51:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.144 14:51:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.144 
14:51:44 -- paths/export.sh@5 -- # export PATH 00:05:21.144 14:51:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.144 14:51:44 -- nvmf/common.sh@46 -- # : 0 00:05:21.144 14:51:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:21.144 14:51:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:21.144 14:51:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:21.144 14:51:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:21.144 14:51:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:21.144 14:51:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:21.144 14:51:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:21.144 14:51:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:21.144 14:51:44 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:21.144 14:51:44 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:21.144 14:51:44 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:21.144 14:51:44 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:21.144 14:51:44 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:21.144 INFO: JSON configuration test init 00:05:21.144 14:51:44 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:21.144 14:51:44 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:21.144 14:51:44 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:21.144 14:51:44 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:21.144 14:51:44 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:21.144 14:51:44 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:21.144 14:51:44 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:21.144 14:51:44 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:21.144 14:51:44 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:21.144 14:51:44 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:21.144 14:51:44 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:21.144 14:51:44 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:21.144 14:51:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:21.144 14:51:44 -- common/autotest_common.sh@10 -- # set +x 00:05:21.144 14:51:44 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:21.145 14:51:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:21.145 14:51:44 -- common/autotest_common.sh@10 -- # set +x 00:05:21.145 14:51:44 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:21.145 14:51:44 -- json_config/json_config.sh@98 -- # local app=target 00:05:21.145 
14:51:44 -- json_config/json_config.sh@99 -- # shift 00:05:21.145 14:51:44 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:21.145 14:51:44 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:21.145 14:51:44 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:21.145 14:51:44 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:21.145 14:51:44 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:21.145 14:51:44 -- json_config/json_config.sh@111 -- # app_pid[$app]=67925 00:05:21.145 14:51:44 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:21.145 Waiting for target to run... 00:05:21.145 14:51:44 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:21.145 14:51:44 -- json_config/json_config.sh@114 -- # waitforlisten 67925 /var/tmp/spdk_tgt.sock 00:05:21.145 14:51:44 -- common/autotest_common.sh@829 -- # '[' -z 67925 ']' 00:05:21.145 14:51:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:21.145 14:51:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.145 14:51:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:21.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:21.145 14:51:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.145 14:51:44 -- common/autotest_common.sh@10 -- # set +x 00:05:21.145 [2024-11-18 14:51:44.644147] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:21.145 [2024-11-18 14:51:44.644517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67925 ] 00:05:21.712 [2024-11-18 14:51:45.107046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.712 [2024-11-18 14:51:45.158541] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:21.712 [2024-11-18 14:51:45.158689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.279 00:05:22.279 14:51:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.279 14:51:45 -- common/autotest_common.sh@862 -- # return 0 00:05:22.279 14:51:45 -- json_config/json_config.sh@115 -- # echo '' 00:05:22.279 14:51:45 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:22.279 14:51:45 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:22.279 14:51:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:22.279 14:51:45 -- common/autotest_common.sh@10 -- # set +x 00:05:22.279 14:51:45 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:22.279 14:51:45 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:22.279 14:51:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:22.279 14:51:45 -- common/autotest_common.sh@10 -- # set +x 00:05:22.279 14:51:45 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:22.279 14:51:45 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:22.279 14:51:45 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
load_config 00:05:22.846 14:51:46 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:22.846 14:51:46 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:22.846 14:51:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:22.846 14:51:46 -- common/autotest_common.sh@10 -- # set +x 00:05:22.846 14:51:46 -- json_config/json_config.sh@48 -- # local ret=0 00:05:22.846 14:51:46 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:22.846 14:51:46 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:22.846 14:51:46 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:22.846 14:51:46 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:22.846 14:51:46 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:22.846 14:51:46 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:22.846 14:51:46 -- json_config/json_config.sh@51 -- # local get_types 00:05:22.846 14:51:46 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:22.846 14:51:46 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:22.846 14:51:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:22.846 14:51:46 -- common/autotest_common.sh@10 -- # set +x 00:05:23.104 14:51:46 -- json_config/json_config.sh@58 -- # return 0 00:05:23.104 14:51:46 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:23.104 14:51:46 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:23.104 14:51:46 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:23.104 14:51:46 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:23.104 14:51:46 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:23.104 14:51:46 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:23.104 14:51:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:23.104 14:51:46 -- common/autotest_common.sh@10 -- # set +x 00:05:23.104 14:51:46 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:23.104 14:51:46 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:23.104 14:51:46 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:23.104 14:51:46 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:23.104 14:51:46 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:23.362 MallocForNvmf0 00:05:23.362 14:51:46 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:23.362 14:51:46 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:23.620 MallocForNvmf1 00:05:23.620 14:51:47 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:23.620 14:51:47 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:23.877 [2024-11-18 14:51:47.228530] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:23.877 14:51:47 -- json_config/json_config.sh@299 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:23.877 14:51:47 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:24.135 14:51:47 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:24.135 14:51:47 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:24.394 14:51:47 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:24.394 14:51:47 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:24.653 14:51:48 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:24.653 14:51:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:24.653 [2024-11-18 14:51:48.209036] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:24.653 14:51:48 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:24.653 14:51:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:24.653 14:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:24.913 14:51:48 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:24.913 14:51:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:24.913 14:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:24.913 14:51:48 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:24.913 14:51:48 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:24.913 14:51:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:25.172 MallocBdevForConfigChangeCheck 00:05:25.172 14:51:48 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:25.172 14:51:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:25.172 14:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:25.172 14:51:48 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:25.172 14:51:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:25.430 INFO: shutting down applications... 00:05:25.430 14:51:48 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
00:05:25.430 14:51:48 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:25.430 14:51:48 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:25.430 14:51:48 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:25.430 14:51:48 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:25.997 Calling clear_iscsi_subsystem 00:05:25.997 Calling clear_nvmf_subsystem 00:05:25.997 Calling clear_nbd_subsystem 00:05:25.997 Calling clear_ublk_subsystem 00:05:25.997 Calling clear_vhost_blk_subsystem 00:05:25.997 Calling clear_vhost_scsi_subsystem 00:05:25.997 Calling clear_scheduler_subsystem 00:05:25.997 Calling clear_bdev_subsystem 00:05:25.997 Calling clear_accel_subsystem 00:05:25.997 Calling clear_vmd_subsystem 00:05:25.997 Calling clear_sock_subsystem 00:05:25.997 Calling clear_iobuf_subsystem 00:05:25.997 14:51:49 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:25.997 14:51:49 -- json_config/json_config.sh@396 -- # count=100 00:05:25.997 14:51:49 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:25.997 14:51:49 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:25.997 14:51:49 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:25.997 14:51:49 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:26.255 14:51:49 -- json_config/json_config.sh@398 -- # break 00:05:26.255 14:51:49 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:26.255 14:51:49 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:26.255 14:51:49 -- json_config/json_config.sh@120 -- # local app=target 00:05:26.255 14:51:49 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:26.255 14:51:49 -- json_config/json_config.sh@124 -- # [[ -n 67925 ]] 00:05:26.255 14:51:49 -- json_config/json_config.sh@127 -- # kill -SIGINT 67925 00:05:26.255 14:51:49 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:26.255 14:51:49 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:26.255 14:51:49 -- json_config/json_config.sh@130 -- # kill -0 67925 00:05:26.255 14:51:49 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:26.822 14:51:50 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:26.822 14:51:50 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:26.822 14:51:50 -- json_config/json_config.sh@130 -- # kill -0 67925 00:05:26.822 14:51:50 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:26.822 14:51:50 -- json_config/json_config.sh@132 -- # break 00:05:26.822 SPDK target shutdown done 00:05:26.822 14:51:50 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:26.822 14:51:50 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:26.822 INFO: relaunching applications... 00:05:26.822 14:51:50 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
00:05:26.822 14:51:50 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:26.822 14:51:50 -- json_config/json_config.sh@98 -- # local app=target 00:05:26.822 14:51:50 -- json_config/json_config.sh@99 -- # shift 00:05:26.822 14:51:50 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:26.822 14:51:50 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:26.822 14:51:50 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:26.822 14:51:50 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:26.822 14:51:50 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:26.822 14:51:50 -- json_config/json_config.sh@111 -- # app_pid[$app]=68205 00:05:26.822 Waiting for target to run... 00:05:26.822 14:51:50 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:26.822 14:51:50 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:26.822 14:51:50 -- json_config/json_config.sh@114 -- # waitforlisten 68205 /var/tmp/spdk_tgt.sock 00:05:26.822 14:51:50 -- common/autotest_common.sh@829 -- # '[' -z 68205 ']' 00:05:26.822 14:51:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:26.822 14:51:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:26.822 14:51:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:26.822 14:51:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.822 14:51:50 -- common/autotest_common.sh@10 -- # set +x 00:05:26.822 [2024-11-18 14:51:50.288556] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:26.822 [2024-11-18 14:51:50.288665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68205 ] 00:05:27.389 [2024-11-18 14:51:50.703800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.389 [2024-11-18 14:51:50.750578] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:27.389 [2024-11-18 14:51:50.750751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.647 [2024-11-18 14:51:51.045919] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:27.647 [2024-11-18 14:51:51.077993] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:27.910 14:51:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.910 14:51:51 -- common/autotest_common.sh@862 -- # return 0 00:05:27.910 00:05:27.910 14:51:51 -- json_config/json_config.sh@115 -- # echo '' 00:05:27.910 14:51:51 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:27.910 14:51:51 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:27.910 INFO: Checking if target configuration is the same... 
00:05:27.910 14:51:51 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:27.910 14:51:51 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:27.910 14:51:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:27.910 + '[' 2 -ne 2 ']' 00:05:27.910 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:27.910 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:27.910 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:27.910 +++ basename /dev/fd/62 00:05:27.910 ++ mktemp /tmp/62.XXX 00:05:27.910 + tmp_file_1=/tmp/62.5vO 00:05:27.910 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:27.910 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:27.910 + tmp_file_2=/tmp/spdk_tgt_config.json.gfS 00:05:27.910 + ret=0 00:05:27.910 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:28.227 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:28.227 + diff -u /tmp/62.5vO /tmp/spdk_tgt_config.json.gfS 00:05:28.227 INFO: JSON config files are the same 00:05:28.227 + echo 'INFO: JSON config files are the same' 00:05:28.227 + rm /tmp/62.5vO /tmp/spdk_tgt_config.json.gfS 00:05:28.227 + exit 0 00:05:28.227 14:51:51 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:28.227 INFO: changing configuration and checking if this can be detected... 00:05:28.227 14:51:51 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:28.227 14:51:51 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:28.227 14:51:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:28.514 14:51:52 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:28.514 14:51:52 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:28.514 14:51:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:28.514 + '[' 2 -ne 2 ']' 00:05:28.514 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:28.514 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:28.514 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:28.514 +++ basename /dev/fd/62 00:05:28.514 ++ mktemp /tmp/62.XXX 00:05:28.514 + tmp_file_1=/tmp/62.sSs 00:05:28.514 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:28.514 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:28.514 + tmp_file_2=/tmp/spdk_tgt_config.json.zjb 00:05:28.514 + ret=0 00:05:28.514 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:29.082 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:29.082 + diff -u /tmp/62.sSs /tmp/spdk_tgt_config.json.zjb 00:05:29.082 + ret=1 00:05:29.082 + echo '=== Start of file: /tmp/62.sSs ===' 00:05:29.082 + cat /tmp/62.sSs 00:05:29.082 + echo '=== End of file: /tmp/62.sSs ===' 00:05:29.082 + echo '' 00:05:29.082 + echo '=== Start of file: /tmp/spdk_tgt_config.json.zjb ===' 00:05:29.082 + cat /tmp/spdk_tgt_config.json.zjb 00:05:29.082 + echo '=== End of file: /tmp/spdk_tgt_config.json.zjb ===' 00:05:29.082 + echo '' 00:05:29.082 + rm /tmp/62.sSs /tmp/spdk_tgt_config.json.zjb 00:05:29.082 + exit 1 00:05:29.082 INFO: configuration change detected. 00:05:29.082 14:51:52 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:29.082 14:51:52 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:29.082 14:51:52 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:29.082 14:51:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:29.082 14:51:52 -- common/autotest_common.sh@10 -- # set +x 00:05:29.082 14:51:52 -- json_config/json_config.sh@360 -- # local ret=0 00:05:29.082 14:51:52 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:29.082 14:51:52 -- json_config/json_config.sh@370 -- # [[ -n 68205 ]] 00:05:29.082 14:51:52 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:29.082 14:51:52 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:29.082 14:51:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:29.082 14:51:52 -- common/autotest_common.sh@10 -- # set +x 00:05:29.082 14:51:52 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:29.082 14:51:52 -- json_config/json_config.sh@246 -- # uname -s 00:05:29.082 14:51:52 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:29.082 14:51:52 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:29.082 14:51:52 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:29.082 14:51:52 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:29.082 14:51:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.082 14:51:52 -- common/autotest_common.sh@10 -- # set +x 00:05:29.082 14:51:52 -- json_config/json_config.sh@376 -- # killprocess 68205 00:05:29.082 14:51:52 -- common/autotest_common.sh@936 -- # '[' -z 68205 ']' 00:05:29.082 14:51:52 -- common/autotest_common.sh@940 -- # kill -0 68205 00:05:29.082 14:51:52 -- common/autotest_common.sh@941 -- # uname 00:05:29.082 14:51:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:29.082 14:51:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68205 00:05:29.082 14:51:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:29.082 14:51:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:29.082 killing process with pid 68205 00:05:29.082 14:51:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68205' 00:05:29.082 
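The comparison traced above is: save the live configuration over RPC, normalize both JSON documents with test/json_config/config_filter.py -method sort, and diff -u the results; identical output means no change (exit 0), any difference means a change was detected (exit 1). A simplified sketch of the same normalize-then-compare idea, assuming that plain key-sorting is an adequate stand-in for what config_filter.py does:

    # Sketch: compare two SPDK JSON configs after normalizing them.
    import json

    def normalized(path):
        with open(path) as f:
            return json.dumps(json.load(f), sort_keys=True, indent=2)

    def configs_match(path_a, path_b):
        return normalized(path_a) == normalized(path_b)

    # In the run above, the first pair (/tmp/62.5vO vs /tmp/spdk_tgt_config.json.gfS)
    # compared equal; after bdev_malloc_delete MallocBdevForConfigChangeCheck the
    # second pair (/tmp/62.sSs vs /tmp/spdk_tgt_config.json.zjb) differed (ret=1).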
14:51:52 -- common/autotest_common.sh@955 -- # kill 68205 00:05:29.082 14:51:52 -- common/autotest_common.sh@960 -- # wait 68205 00:05:29.342 14:51:52 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:29.342 14:51:52 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:29.342 14:51:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.342 14:51:52 -- common/autotest_common.sh@10 -- # set +x 00:05:29.342 14:51:52 -- json_config/json_config.sh@381 -- # return 0 00:05:29.342 INFO: Success 00:05:29.342 14:51:52 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:29.342 ************************************ 00:05:29.342 END TEST json_config 00:05:29.342 ************************************ 00:05:29.342 00:05:29.342 real 0m8.507s 00:05:29.342 user 0m12.113s 00:05:29.342 sys 0m1.912s 00:05:29.342 14:51:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.342 14:51:52 -- common/autotest_common.sh@10 -- # set +x 00:05:29.342 14:51:52 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:29.342 14:51:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.342 14:51:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.342 14:51:52 -- common/autotest_common.sh@10 -- # set +x 00:05:29.342 ************************************ 00:05:29.342 START TEST json_config_extra_key 00:05:29.342 ************************************ 00:05:29.342 14:51:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:29.601 14:51:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:29.601 14:51:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:29.601 14:51:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:29.601 14:51:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:29.601 14:51:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:29.601 14:51:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:29.601 14:51:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:29.601 14:51:53 -- scripts/common.sh@335 -- # IFS=.-: 00:05:29.601 14:51:53 -- scripts/common.sh@335 -- # read -ra ver1 00:05:29.601 14:51:53 -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.601 14:51:53 -- scripts/common.sh@336 -- # read -ra ver2 00:05:29.601 14:51:53 -- scripts/common.sh@337 -- # local 'op=<' 00:05:29.601 14:51:53 -- scripts/common.sh@339 -- # ver1_l=2 00:05:29.601 14:51:53 -- scripts/common.sh@340 -- # ver2_l=1 00:05:29.601 14:51:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:29.601 14:51:53 -- scripts/common.sh@343 -- # case "$op" in 00:05:29.601 14:51:53 -- scripts/common.sh@344 -- # : 1 00:05:29.601 14:51:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:29.601 14:51:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.601 14:51:53 -- scripts/common.sh@364 -- # decimal 1 00:05:29.601 14:51:53 -- scripts/common.sh@352 -- # local d=1 00:05:29.601 14:51:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.601 14:51:53 -- scripts/common.sh@354 -- # echo 1 00:05:29.601 14:51:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:29.601 14:51:53 -- scripts/common.sh@365 -- # decimal 2 00:05:29.601 14:51:53 -- scripts/common.sh@352 -- # local d=2 00:05:29.601 14:51:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.601 14:51:53 -- scripts/common.sh@354 -- # echo 2 00:05:29.601 14:51:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:29.601 14:51:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:29.601 14:51:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:29.601 14:51:53 -- scripts/common.sh@367 -- # return 0 00:05:29.601 14:51:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.601 14:51:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:29.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.601 --rc genhtml_branch_coverage=1 00:05:29.601 --rc genhtml_function_coverage=1 00:05:29.601 --rc genhtml_legend=1 00:05:29.601 --rc geninfo_all_blocks=1 00:05:29.601 --rc geninfo_unexecuted_blocks=1 00:05:29.601 00:05:29.601 ' 00:05:29.601 14:51:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:29.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.601 --rc genhtml_branch_coverage=1 00:05:29.601 --rc genhtml_function_coverage=1 00:05:29.601 --rc genhtml_legend=1 00:05:29.601 --rc geninfo_all_blocks=1 00:05:29.601 --rc geninfo_unexecuted_blocks=1 00:05:29.601 00:05:29.601 ' 00:05:29.601 14:51:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:29.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.601 --rc genhtml_branch_coverage=1 00:05:29.601 --rc genhtml_function_coverage=1 00:05:29.601 --rc genhtml_legend=1 00:05:29.601 --rc geninfo_all_blocks=1 00:05:29.601 --rc geninfo_unexecuted_blocks=1 00:05:29.601 00:05:29.601 ' 00:05:29.601 14:51:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:29.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.601 --rc genhtml_branch_coverage=1 00:05:29.601 --rc genhtml_function_coverage=1 00:05:29.601 --rc genhtml_legend=1 00:05:29.601 --rc geninfo_all_blocks=1 00:05:29.601 --rc geninfo_unexecuted_blocks=1 00:05:29.601 00:05:29.601 ' 00:05:29.601 14:51:53 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:29.601 14:51:53 -- nvmf/common.sh@7 -- # uname -s 00:05:29.601 14:51:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:29.601 14:51:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:29.601 14:51:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:29.601 14:51:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:29.601 14:51:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:29.601 14:51:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:29.601 14:51:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:29.601 14:51:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:29.601 14:51:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:29.601 14:51:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:29.601 14:51:53 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:05:29.601 14:51:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:05:29.601 14:51:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:29.601 14:51:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:29.601 14:51:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:29.601 14:51:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:29.601 14:51:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.601 14:51:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.601 14:51:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.601 14:51:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.601 14:51:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.601 14:51:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.601 14:51:53 -- paths/export.sh@5 -- # export PATH 00:05:29.601 14:51:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.601 14:51:53 -- nvmf/common.sh@46 -- # : 0 00:05:29.601 14:51:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:29.601 14:51:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:29.601 14:51:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:29.601 14:51:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:29.601 14:51:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:29.601 14:51:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:29.601 14:51:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:29.601 14:51:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:29.601 14:51:53 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:29.601 14:51:53 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:29.601 14:51:53 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:29.601 14:51:53 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:29.601 14:51:53 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:29.601 14:51:53 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:29.601 14:51:53 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:29.602 14:51:53 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:29.602 14:51:53 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:29.602 INFO: launching applications... 00:05:29.602 14:51:53 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:29.602 14:51:53 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:29.602 14:51:53 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:29.602 14:51:53 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:29.602 14:51:53 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:29.602 14:51:53 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:29.602 14:51:53 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68383 00:05:29.602 Waiting for target to run... 00:05:29.602 14:51:53 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:29.602 14:51:53 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:29.602 14:51:53 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68383 /var/tmp/spdk_tgt.sock 00:05:29.602 14:51:53 -- common/autotest_common.sh@829 -- # '[' -z 68383 ']' 00:05:29.602 14:51:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:29.602 14:51:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:29.602 14:51:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:29.602 14:51:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.602 14:51:53 -- common/autotest_common.sh@10 -- # set +x 00:05:29.602 [2024-11-18 14:51:53.162008] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
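The extra-key test launches spdk_tgt (pid 68383) with --json extra_key.json and later shuts it down by sending SIGINT and polling with kill -0 for up to 30 half-second intervals, as traced below. A sketch of that shutdown pattern; the attempt count and interval mirror the script, but the shutdown_app helper name is hypothetical (the script's logic lives inline in json_config_test_shutdown_app):

    # Sketch: graceful shutdown — SIGINT, then probe the pid until it exits.
    import os
    import signal
    import time

    def shutdown_app(pid, attempts=30, interval=0.5):
        os.kill(pid, signal.SIGINT)      # ask spdk_tgt to exit cleanly
        for _ in range(attempts):
            try:
                os.kill(pid, 0)          # signal 0 = existence probe, sends nothing
            except ProcessLookupError:
                return True              # process is gone -> shutdown done
            time.sleep(interval)
        return False                     # still running after the grace period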
00:05:29.602 [2024-11-18 14:51:53.162117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68383 ] 00:05:30.169 [2024-11-18 14:51:53.664904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.169 [2024-11-18 14:51:53.739885] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:30.169 [2024-11-18 14:51:53.740035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.736 14:51:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.736 14:51:54 -- common/autotest_common.sh@862 -- # return 0 00:05:30.736 00:05:30.736 14:51:54 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:30.736 INFO: shutting down applications... 00:05:30.736 14:51:54 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:30.736 14:51:54 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:30.736 14:51:54 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:30.736 14:51:54 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:30.736 14:51:54 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68383 ]] 00:05:30.736 14:51:54 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68383 00:05:30.736 14:51:54 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:30.736 14:51:54 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:30.736 14:51:54 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68383 00:05:30.736 14:51:54 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:31.303 14:51:54 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:31.303 14:51:54 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:31.303 14:51:54 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68383 00:05:31.303 14:51:54 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:31.303 14:51:54 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:31.303 14:51:54 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:31.303 SPDK target shutdown done 00:05:31.303 14:51:54 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:31.303 Success 00:05:31.303 14:51:54 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:31.303 ************************************ 00:05:31.303 END TEST json_config_extra_key 00:05:31.303 ************************************ 00:05:31.303 00:05:31.303 real 0m1.756s 00:05:31.303 user 0m1.536s 00:05:31.303 sys 0m0.553s 00:05:31.303 14:51:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.303 14:51:54 -- common/autotest_common.sh@10 -- # set +x 00:05:31.303 14:51:54 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:31.303 14:51:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.303 14:51:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.303 14:51:54 -- common/autotest_common.sh@10 -- # set +x 00:05:31.303 ************************************ 00:05:31.303 START TEST alias_rpc 00:05:31.303 ************************************ 00:05:31.303 14:51:54 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:31.303 * Looking for test storage... 00:05:31.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:31.303 14:51:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:31.303 14:51:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:31.303 14:51:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:31.561 14:51:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:31.561 14:51:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:31.561 14:51:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:31.561 14:51:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:31.561 14:51:54 -- scripts/common.sh@335 -- # IFS=.-: 00:05:31.561 14:51:54 -- scripts/common.sh@335 -- # read -ra ver1 00:05:31.561 14:51:54 -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.561 14:51:54 -- scripts/common.sh@336 -- # read -ra ver2 00:05:31.561 14:51:54 -- scripts/common.sh@337 -- # local 'op=<' 00:05:31.561 14:51:54 -- scripts/common.sh@339 -- # ver1_l=2 00:05:31.561 14:51:54 -- scripts/common.sh@340 -- # ver2_l=1 00:05:31.562 14:51:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:31.562 14:51:54 -- scripts/common.sh@343 -- # case "$op" in 00:05:31.562 14:51:54 -- scripts/common.sh@344 -- # : 1 00:05:31.562 14:51:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:31.562 14:51:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.562 14:51:54 -- scripts/common.sh@364 -- # decimal 1 00:05:31.562 14:51:54 -- scripts/common.sh@352 -- # local d=1 00:05:31.562 14:51:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.562 14:51:54 -- scripts/common.sh@354 -- # echo 1 00:05:31.562 14:51:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:31.562 14:51:54 -- scripts/common.sh@365 -- # decimal 2 00:05:31.562 14:51:54 -- scripts/common.sh@352 -- # local d=2 00:05:31.562 14:51:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.562 14:51:54 -- scripts/common.sh@354 -- # echo 2 00:05:31.562 14:51:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:31.562 14:51:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:31.562 14:51:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:31.562 14:51:54 -- scripts/common.sh@367 -- # return 0 00:05:31.562 14:51:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.562 14:51:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:31.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.562 --rc genhtml_branch_coverage=1 00:05:31.562 --rc genhtml_function_coverage=1 00:05:31.562 --rc genhtml_legend=1 00:05:31.562 --rc geninfo_all_blocks=1 00:05:31.562 --rc geninfo_unexecuted_blocks=1 00:05:31.562 00:05:31.562 ' 00:05:31.562 14:51:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:31.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.562 --rc genhtml_branch_coverage=1 00:05:31.562 --rc genhtml_function_coverage=1 00:05:31.562 --rc genhtml_legend=1 00:05:31.562 --rc geninfo_all_blocks=1 00:05:31.562 --rc geninfo_unexecuted_blocks=1 00:05:31.562 00:05:31.562 ' 00:05:31.562 14:51:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:31.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.562 --rc genhtml_branch_coverage=1 00:05:31.562 --rc genhtml_function_coverage=1 00:05:31.562 --rc genhtml_legend=1 
00:05:31.562 --rc geninfo_all_blocks=1 00:05:31.562 --rc geninfo_unexecuted_blocks=1 00:05:31.562 00:05:31.562 ' 00:05:31.562 14:51:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:31.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.562 --rc genhtml_branch_coverage=1 00:05:31.562 --rc genhtml_function_coverage=1 00:05:31.562 --rc genhtml_legend=1 00:05:31.562 --rc geninfo_all_blocks=1 00:05:31.562 --rc geninfo_unexecuted_blocks=1 00:05:31.562 00:05:31.562 ' 00:05:31.562 14:51:54 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:31.562 14:51:54 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68466 00:05:31.562 14:51:54 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68466 00:05:31.562 14:51:54 -- common/autotest_common.sh@829 -- # '[' -z 68466 ']' 00:05:31.562 14:51:54 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.562 14:51:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.562 14:51:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.562 14:51:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.562 14:51:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.562 14:51:54 -- common/autotest_common.sh@10 -- # set +x 00:05:31.562 [2024-11-18 14:51:54.983626] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:31.562 [2024-11-18 14:51:54.983721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68466 ] 00:05:31.562 [2024-11-18 14:51:55.122767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.820 [2024-11-18 14:51:55.187946] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:31.820 [2024-11-18 14:51:55.188089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.754 14:51:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.754 14:51:55 -- common/autotest_common.sh@862 -- # return 0 00:05:32.754 14:51:55 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:32.754 14:51:56 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68466 00:05:32.754 14:51:56 -- common/autotest_common.sh@936 -- # '[' -z 68466 ']' 00:05:32.754 14:51:56 -- common/autotest_common.sh@940 -- # kill -0 68466 00:05:32.754 14:51:56 -- common/autotest_common.sh@941 -- # uname 00:05:32.754 14:51:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:32.754 14:51:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68466 00:05:32.754 14:51:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:32.754 14:51:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:32.754 killing process with pid 68466 00:05:32.754 14:51:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68466' 00:05:32.754 14:51:56 -- common/autotest_common.sh@955 -- # kill 68466 00:05:32.754 14:51:56 -- common/autotest_common.sh@960 -- # wait 68466 00:05:33.321 ************************************ 00:05:33.321 END TEST alias_rpc 00:05:33.321 
************************************ 00:05:33.321 00:05:33.321 real 0m1.886s 00:05:33.321 user 0m2.183s 00:05:33.321 sys 0m0.431s 00:05:33.321 14:51:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.321 14:51:56 -- common/autotest_common.sh@10 -- # set +x 00:05:33.321 14:51:56 -- spdk/autotest.sh@169 -- # [[ 1 -eq 0 ]] 00:05:33.321 14:51:56 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:33.321 14:51:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.321 14:51:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.321 14:51:56 -- common/autotest_common.sh@10 -- # set +x 00:05:33.321 ************************************ 00:05:33.321 START TEST dpdk_mem_utility 00:05:33.321 ************************************ 00:05:33.321 14:51:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:33.321 * Looking for test storage... 00:05:33.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:33.321 14:51:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:33.321 14:51:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:33.321 14:51:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:33.321 14:51:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:33.321 14:51:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:33.321 14:51:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:33.321 14:51:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:33.321 14:51:56 -- scripts/common.sh@335 -- # IFS=.-: 00:05:33.321 14:51:56 -- scripts/common.sh@335 -- # read -ra ver1 00:05:33.321 14:51:56 -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.321 14:51:56 -- scripts/common.sh@336 -- # read -ra ver2 00:05:33.321 14:51:56 -- scripts/common.sh@337 -- # local 'op=<' 00:05:33.321 14:51:56 -- scripts/common.sh@339 -- # ver1_l=2 00:05:33.321 14:51:56 -- scripts/common.sh@340 -- # ver2_l=1 00:05:33.321 14:51:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:33.321 14:51:56 -- scripts/common.sh@343 -- # case "$op" in 00:05:33.321 14:51:56 -- scripts/common.sh@344 -- # : 1 00:05:33.321 14:51:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:33.321 14:51:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.321 14:51:56 -- scripts/common.sh@364 -- # decimal 1 00:05:33.321 14:51:56 -- scripts/common.sh@352 -- # local d=1 00:05:33.321 14:51:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.321 14:51:56 -- scripts/common.sh@354 -- # echo 1 00:05:33.321 14:51:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:33.321 14:51:56 -- scripts/common.sh@365 -- # decimal 2 00:05:33.321 14:51:56 -- scripts/common.sh@352 -- # local d=2 00:05:33.321 14:51:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.321 14:51:56 -- scripts/common.sh@354 -- # echo 2 00:05:33.321 14:51:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:33.321 14:51:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:33.321 14:51:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:33.321 14:51:56 -- scripts/common.sh@367 -- # return 0 00:05:33.321 14:51:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.321 14:51:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:33.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.321 --rc genhtml_branch_coverage=1 00:05:33.321 --rc genhtml_function_coverage=1 00:05:33.321 --rc genhtml_legend=1 00:05:33.321 --rc geninfo_all_blocks=1 00:05:33.321 --rc geninfo_unexecuted_blocks=1 00:05:33.321 00:05:33.321 ' 00:05:33.321 14:51:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:33.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.321 --rc genhtml_branch_coverage=1 00:05:33.321 --rc genhtml_function_coverage=1 00:05:33.321 --rc genhtml_legend=1 00:05:33.321 --rc geninfo_all_blocks=1 00:05:33.321 --rc geninfo_unexecuted_blocks=1 00:05:33.321 00:05:33.321 ' 00:05:33.321 14:51:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:33.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.321 --rc genhtml_branch_coverage=1 00:05:33.321 --rc genhtml_function_coverage=1 00:05:33.321 --rc genhtml_legend=1 00:05:33.321 --rc geninfo_all_blocks=1 00:05:33.321 --rc geninfo_unexecuted_blocks=1 00:05:33.321 00:05:33.321 ' 00:05:33.321 14:51:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:33.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.321 --rc genhtml_branch_coverage=1 00:05:33.321 --rc genhtml_function_coverage=1 00:05:33.321 --rc genhtml_legend=1 00:05:33.321 --rc geninfo_all_blocks=1 00:05:33.321 --rc geninfo_unexecuted_blocks=1 00:05:33.321 00:05:33.321 ' 00:05:33.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.321 14:51:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:33.321 14:51:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68565 00:05:33.321 14:51:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68565 00:05:33.321 14:51:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:33.321 14:51:56 -- common/autotest_common.sh@829 -- # '[' -z 68565 ']' 00:05:33.321 14:51:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.321 14:51:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.321 14:51:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
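The dpdk_mem_utility test below asks the target (pid 68565) for a memory dump via the env_dpdk_get_mem_stats RPC and then summarizes /tmp/spdk_mem_dump.txt with scripts/dpdk_mem_info.py. For illustration, a raw JSON-RPC sketch of that single call over /var/tmp/spdk.sock; scripts/rpc.py is the supported client, and the one-shot recv below is a simplifying assumption about response framing:

    # Sketch: issue env_dpdk_get_mem_stats over the target's UNIX RPC socket.
    import json
    import socket

    def get_mem_stats(sock_path="/var/tmp/spdk.sock"):
        req = {"jsonrpc": "2.0", "id": 1, "method": "env_dpdk_get_mem_stats"}
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            reply = json.loads(s.recv(65536).decode())  # assumes one recv suffices
        return reply["result"]["filename"]  # e.g. "/tmp/spdk_mem_dump.txt"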
00:05:33.321 14:51:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.321 14:51:56 -- common/autotest_common.sh@10 -- # set +x 00:05:33.580 [2024-11-18 14:51:56.925441] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:33.580 [2024-11-18 14:51:56.925549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68565 ] 00:05:33.580 [2024-11-18 14:51:57.065788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.580 [2024-11-18 14:51:57.122208] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:33.580 [2024-11-18 14:51:57.122394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.519 14:51:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.519 14:51:57 -- common/autotest_common.sh@862 -- # return 0 00:05:34.519 14:51:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:34.519 14:51:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:34.519 14:51:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.519 14:51:57 -- common/autotest_common.sh@10 -- # set +x 00:05:34.519 { 00:05:34.519 "filename": "/tmp/spdk_mem_dump.txt" 00:05:34.519 } 00:05:34.519 14:51:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.519 14:51:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:34.519 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:34.519 1 heaps totaling size 814.000000 MiB 00:05:34.519 size: 814.000000 MiB heap id: 0 00:05:34.519 end heaps---------- 00:05:34.519 8 mempools totaling size 598.116089 MiB 00:05:34.519 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:34.519 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:34.519 size: 84.521057 MiB name: bdev_io_68565 00:05:34.519 size: 51.011292 MiB name: evtpool_68565 00:05:34.519 size: 50.003479 MiB name: msgpool_68565 00:05:34.519 size: 21.763794 MiB name: PDU_Pool 00:05:34.519 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:34.519 size: 0.026123 MiB name: Session_Pool 00:05:34.519 end mempools------- 00:05:34.519 6 memzones totaling size 4.142822 MiB 00:05:34.519 size: 1.000366 MiB name: RG_ring_0_68565 00:05:34.519 size: 1.000366 MiB name: RG_ring_1_68565 00:05:34.519 size: 1.000366 MiB name: RG_ring_4_68565 00:05:34.519 size: 1.000366 MiB name: RG_ring_5_68565 00:05:34.519 size: 0.125366 MiB name: RG_ring_2_68565 00:05:34.519 size: 0.015991 MiB name: RG_ring_3_68565 00:05:34.519 end memzones------- 00:05:34.519 14:51:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:34.519 heap id: 0 total size: 814.000000 MiB number of busy elements: 207 number of free elements: 15 00:05:34.519 list of free elements. 
size: 12.488953 MiB 00:05:34.519 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:34.519 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:34.519 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:34.519 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:34.519 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:34.519 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:34.519 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:34.519 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:34.519 element at address: 0x200000200000 with size: 0.837219 MiB 00:05:34.519 element at address: 0x20001aa00000 with size: 0.572632 MiB 00:05:34.519 element at address: 0x20000b200000 with size: 0.489990 MiB 00:05:34.519 element at address: 0x200000800000 with size: 0.487061 MiB 00:05:34.519 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:34.519 element at address: 0x200027e00000 with size: 0.399597 MiB 00:05:34.519 element at address: 0x200003a00000 with size: 0.351685 MiB 00:05:34.519 list of standard malloc elements. size: 199.248474 MiB 00:05:34.519 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:34.519 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:34.519 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:34.519 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:34.519 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:34.519 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:34.519 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:34.519 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:34.519 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:34.519 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:05:34.519 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:34.519 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:34.520 element at 
address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:34.520 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:34.520 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:34.520 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa94a80 
with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:34.520 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e664c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e66580 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6d180 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6eb80 with size: 0.000183 MiB 
00:05:34.520 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:34.520 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:34.521 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:34.521 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:34.521 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:34.521 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:34.521 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:34.521 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:34.521 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:34.521 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:34.521 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:34.521 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:34.521 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:34.521 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:34.521 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:34.521 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:34.521 list of memzone associated elements. 
size: 602.262573 MiB 00:05:34.521 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:34.521 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:34.521 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:34.521 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:34.521 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:34.521 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68565_0 00:05:34.521 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:34.521 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68565_0 00:05:34.521 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:34.521 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68565_0 00:05:34.521 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:34.521 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:34.521 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:34.521 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:34.521 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:34.521 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68565 00:05:34.521 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:34.521 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68565 00:05:34.521 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:34.521 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68565 00:05:34.521 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:34.521 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:34.521 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:34.521 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:34.521 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:34.521 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:34.521 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:34.521 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:34.521 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:34.521 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68565 00:05:34.521 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:34.521 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68565 00:05:34.521 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:34.521 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68565 00:05:34.521 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:34.521 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68565 00:05:34.521 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:34.521 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68565 00:05:34.521 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:34.521 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:34.521 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:34.521 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:34.521 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:34.521 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:34.521 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:34.521 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_68565 00:05:34.521 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:34.521 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:34.521 element at address: 0x200027e66640 with size: 0.023743 MiB 00:05:34.521 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:34.521 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:34.521 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68565 00:05:34.521 element at address: 0x200027e6c780 with size: 0.002441 MiB 00:05:34.521 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:34.521 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:34.521 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68565 00:05:34.521 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:34.521 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68565 00:05:34.521 element at address: 0x200027e6d240 with size: 0.000305 MiB 00:05:34.521 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:34.521 14:51:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:34.521 14:51:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68565 00:05:34.521 14:51:58 -- common/autotest_common.sh@936 -- # '[' -z 68565 ']' 00:05:34.521 14:51:58 -- common/autotest_common.sh@940 -- # kill -0 68565 00:05:34.521 14:51:58 -- common/autotest_common.sh@941 -- # uname 00:05:34.521 14:51:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:34.521 14:51:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68565 00:05:34.780 killing process with pid 68565 00:05:34.780 14:51:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:34.780 14:51:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:34.780 14:51:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68565' 00:05:34.780 14:51:58 -- common/autotest_common.sh@955 -- # kill 68565 00:05:34.780 14:51:58 -- common/autotest_common.sh@960 -- # wait 68565 00:05:35.039 00:05:35.039 real 0m1.812s 00:05:35.039 user 0m1.995s 00:05:35.039 sys 0m0.452s 00:05:35.039 14:51:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.039 14:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:35.039 ************************************ 00:05:35.039 END TEST dpdk_mem_utility 00:05:35.039 ************************************ 00:05:35.039 14:51:58 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:35.039 14:51:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.039 14:51:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.039 14:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:35.039 ************************************ 00:05:35.039 START TEST event 00:05:35.039 ************************************ 00:05:35.039 14:51:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:35.039 * Looking for test storage... 
00:05:35.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:35.039 14:51:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:35.039 14:51:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:35.298 14:51:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:35.298 14:51:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:35.298 14:51:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:35.298 14:51:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:35.298 14:51:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:35.298 14:51:58 -- scripts/common.sh@335 -- # IFS=.-: 00:05:35.298 14:51:58 -- scripts/common.sh@335 -- # read -ra ver1 00:05:35.298 14:51:58 -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.298 14:51:58 -- scripts/common.sh@336 -- # read -ra ver2 00:05:35.298 14:51:58 -- scripts/common.sh@337 -- # local 'op=<' 00:05:35.298 14:51:58 -- scripts/common.sh@339 -- # ver1_l=2 00:05:35.298 14:51:58 -- scripts/common.sh@340 -- # ver2_l=1 00:05:35.298 14:51:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:35.298 14:51:58 -- scripts/common.sh@343 -- # case "$op" in 00:05:35.298 14:51:58 -- scripts/common.sh@344 -- # : 1 00:05:35.298 14:51:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:35.298 14:51:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.298 14:51:58 -- scripts/common.sh@364 -- # decimal 1 00:05:35.298 14:51:58 -- scripts/common.sh@352 -- # local d=1 00:05:35.298 14:51:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.298 14:51:58 -- scripts/common.sh@354 -- # echo 1 00:05:35.298 14:51:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:35.298 14:51:58 -- scripts/common.sh@365 -- # decimal 2 00:05:35.298 14:51:58 -- scripts/common.sh@352 -- # local d=2 00:05:35.298 14:51:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.298 14:51:58 -- scripts/common.sh@354 -- # echo 2 00:05:35.298 14:51:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:35.298 14:51:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:35.298 14:51:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:35.298 14:51:58 -- scripts/common.sh@367 -- # return 0 00:05:35.298 14:51:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.298 14:51:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:35.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.298 --rc genhtml_branch_coverage=1 00:05:35.298 --rc genhtml_function_coverage=1 00:05:35.298 --rc genhtml_legend=1 00:05:35.298 --rc geninfo_all_blocks=1 00:05:35.298 --rc geninfo_unexecuted_blocks=1 00:05:35.298 00:05:35.298 ' 00:05:35.298 14:51:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:35.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.298 --rc genhtml_branch_coverage=1 00:05:35.298 --rc genhtml_function_coverage=1 00:05:35.298 --rc genhtml_legend=1 00:05:35.298 --rc geninfo_all_blocks=1 00:05:35.298 --rc geninfo_unexecuted_blocks=1 00:05:35.298 00:05:35.298 ' 00:05:35.298 14:51:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:35.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.298 --rc genhtml_branch_coverage=1 00:05:35.298 --rc genhtml_function_coverage=1 00:05:35.298 --rc genhtml_legend=1 00:05:35.298 --rc geninfo_all_blocks=1 00:05:35.298 --rc geninfo_unexecuted_blocks=1 00:05:35.298 00:05:35.298 ' 00:05:35.298 14:51:58 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:35.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.298 --rc genhtml_branch_coverage=1 00:05:35.298 --rc genhtml_function_coverage=1 00:05:35.298 --rc genhtml_legend=1 00:05:35.298 --rc geninfo_all_blocks=1 00:05:35.298 --rc geninfo_unexecuted_blocks=1 00:05:35.298 00:05:35.298 ' 00:05:35.298 14:51:58 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:35.298 14:51:58 -- bdev/nbd_common.sh@6 -- # set -e 00:05:35.298 14:51:58 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:35.298 14:51:58 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:35.298 14:51:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.298 14:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:35.298 ************************************ 00:05:35.298 START TEST event_perf 00:05:35.298 ************************************ 00:05:35.298 14:51:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:35.298 Running I/O for 1 seconds...[2024-11-18 14:51:58.754576] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:35.298 [2024-11-18 14:51:58.754677] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68667 ] 00:05:35.557 [2024-11-18 14:51:58.892537] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:35.557 [2024-11-18 14:51:58.948700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.557 [2024-11-18 14:51:58.948843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.557 [2024-11-18 14:51:58.948988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.557 [2024-11-18 14:51:58.948989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.491 Running I/O for 1 seconds... 00:05:36.491 lcore 0: 131775 00:05:36.492 lcore 1: 131776 00:05:36.492 lcore 2: 131776 00:05:36.492 lcore 3: 131775 00:05:36.492 done. 00:05:36.492 00:05:36.492 real 0m1.288s 00:05:36.492 user 0m4.105s 00:05:36.492 sys 0m0.063s 00:05:36.492 14:52:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.492 14:52:00 -- common/autotest_common.sh@10 -- # set +x 00:05:36.492 ************************************ 00:05:36.492 END TEST event_perf 00:05:36.492 ************************************ 00:05:36.492 14:52:00 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:36.492 14:52:00 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:36.492 14:52:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.492 14:52:00 -- common/autotest_common.sh@10 -- # set +x 00:05:36.750 ************************************ 00:05:36.750 START TEST event_reactor 00:05:36.750 ************************************ 00:05:36.750 14:52:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:36.750 [2024-11-18 14:52:00.097184] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:36.750 [2024-11-18 14:52:00.097267] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68700 ] 00:05:36.750 [2024-11-18 14:52:00.228004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.750 [2024-11-18 14:52:00.303985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.128 test_start 00:05:38.128 oneshot 00:05:38.128 tick 100 00:05:38.128 tick 100 00:05:38.128 tick 250 00:05:38.128 tick 100 00:05:38.128 tick 100 00:05:38.128 tick 100 00:05:38.128 tick 250 00:05:38.128 tick 500 00:05:38.128 tick 100 00:05:38.128 tick 100 00:05:38.128 tick 250 00:05:38.128 tick 100 00:05:38.128 tick 100 00:05:38.128 test_end 00:05:38.128 00:05:38.128 real 0m1.281s 00:05:38.128 user 0m1.118s 00:05:38.128 sys 0m0.057s 00:05:38.128 14:52:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.128 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:38.128 ************************************ 00:05:38.128 END TEST event_reactor 00:05:38.128 ************************************ 00:05:38.128 14:52:01 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:38.128 14:52:01 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:38.128 14:52:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.128 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:38.128 ************************************ 00:05:38.128 START TEST event_reactor_perf 00:05:38.128 ************************************ 00:05:38.128 14:52:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:38.128 [2024-11-18 14:52:01.435869] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:38.128 [2024-11-18 14:52:01.435978] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68741 ] 00:05:38.128 [2024-11-18 14:52:01.572532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.128 [2024-11-18 14:52:01.626339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.505 test_start 00:05:39.505 test_end 00:05:39.505 Performance: 472713 events per second 00:05:39.505 00:05:39.505 real 0m1.257s 00:05:39.505 user 0m1.099s 00:05:39.505 sys 0m0.052s 00:05:39.505 14:52:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.505 ************************************ 00:05:39.505 14:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:39.505 END TEST event_reactor_perf 00:05:39.505 ************************************ 00:05:39.505 14:52:02 -- event/event.sh@49 -- # uname -s 00:05:39.505 14:52:02 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:39.505 14:52:02 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:39.505 14:52:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.505 14:52:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.505 14:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:39.505 ************************************ 00:05:39.505 START TEST event_scheduler 00:05:39.505 ************************************ 00:05:39.505 14:52:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:39.505 * Looking for test storage... 00:05:39.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:39.505 14:52:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:39.505 14:52:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:39.505 14:52:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:39.505 14:52:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:39.505 14:52:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:39.505 14:52:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:39.505 14:52:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:39.505 14:52:02 -- scripts/common.sh@335 -- # IFS=.-: 00:05:39.505 14:52:02 -- scripts/common.sh@335 -- # read -ra ver1 00:05:39.505 14:52:02 -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.505 14:52:02 -- scripts/common.sh@336 -- # read -ra ver2 00:05:39.505 14:52:02 -- scripts/common.sh@337 -- # local 'op=<' 00:05:39.505 14:52:02 -- scripts/common.sh@339 -- # ver1_l=2 00:05:39.505 14:52:02 -- scripts/common.sh@340 -- # ver2_l=1 00:05:39.505 14:52:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:39.505 14:52:02 -- scripts/common.sh@343 -- # case "$op" in 00:05:39.505 14:52:02 -- scripts/common.sh@344 -- # : 1 00:05:39.505 14:52:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:39.505 14:52:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.505 14:52:02 -- scripts/common.sh@364 -- # decimal 1 00:05:39.505 14:52:02 -- scripts/common.sh@352 -- # local d=1 00:05:39.505 14:52:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.505 14:52:02 -- scripts/common.sh@354 -- # echo 1 00:05:39.505 14:52:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:39.505 14:52:02 -- scripts/common.sh@365 -- # decimal 2 00:05:39.505 14:52:02 -- scripts/common.sh@352 -- # local d=2 00:05:39.505 14:52:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.505 14:52:02 -- scripts/common.sh@354 -- # echo 2 00:05:39.505 14:52:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:39.505 14:52:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:39.505 14:52:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:39.505 14:52:02 -- scripts/common.sh@367 -- # return 0 00:05:39.505 14:52:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.505 14:52:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:39.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.505 --rc genhtml_branch_coverage=1 00:05:39.505 --rc genhtml_function_coverage=1 00:05:39.505 --rc genhtml_legend=1 00:05:39.505 --rc geninfo_all_blocks=1 00:05:39.505 --rc geninfo_unexecuted_blocks=1 00:05:39.505 00:05:39.505 ' 00:05:39.505 14:52:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:39.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.505 --rc genhtml_branch_coverage=1 00:05:39.505 --rc genhtml_function_coverage=1 00:05:39.505 --rc genhtml_legend=1 00:05:39.505 --rc geninfo_all_blocks=1 00:05:39.505 --rc geninfo_unexecuted_blocks=1 00:05:39.505 00:05:39.505 ' 00:05:39.505 14:52:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:39.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.505 --rc genhtml_branch_coverage=1 00:05:39.505 --rc genhtml_function_coverage=1 00:05:39.505 --rc genhtml_legend=1 00:05:39.505 --rc geninfo_all_blocks=1 00:05:39.505 --rc geninfo_unexecuted_blocks=1 00:05:39.505 00:05:39.505 ' 00:05:39.505 14:52:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:39.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.505 --rc genhtml_branch_coverage=1 00:05:39.505 --rc genhtml_function_coverage=1 00:05:39.505 --rc genhtml_legend=1 00:05:39.505 --rc geninfo_all_blocks=1 00:05:39.505 --rc geninfo_unexecuted_blocks=1 00:05:39.505 00:05:39.505 ' 00:05:39.505 14:52:02 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:39.505 14:52:02 -- scheduler/scheduler.sh@35 -- # scheduler_pid=68804 00:05:39.505 14:52:02 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:39.505 14:52:02 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.505 14:52:02 -- scheduler/scheduler.sh@37 -- # waitforlisten 68804 00:05:39.505 14:52:02 -- common/autotest_common.sh@829 -- # '[' -z 68804 ']' 00:05:39.505 14:52:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.505 14:52:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.505 14:52:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
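The xtrace above walks through the lcov version gate from scripts/common.sh: lt 1.15 2 dispatches to cmp_versions, which splits both versions on ".-:" and compares them component by component before the LCOV_OPTS coverage flags are exported. A minimal sketch of that comparison logic, reconstructed from the trace, follows; the function and variable names (decimal, cmp_versions, lt, ver1, ver2, ver1_l, ver2_l, lcov_rc_opt) come from the trace itself, but the early-return structure, the numeric fallback to 0, and the fact that only the '<' operator is handled are simplifications and assumptions, not the exact upstream source.

decimal() {
    # Reduce one version component to a plain integer; non-numeric parts fall back to 0 (assumption).
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
}

cmp_versions() {
    # cmp_versions 1.15 '<' 2 -- compare dotted versions component by component.
    # Only the '<' operator from the trace is modeled here.
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    local op=$2
    IFS=.-: read -ra ver2 <<< "$3"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v d1 d2
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        d1=$(decimal "${ver1[v]:-0}")
        d2=$(decimal "${ver2[v]:-0}")
        (( d1 < d2 )) && return 0   # strictly smaller: '<' holds
        (( d1 > d2 )) && return 1
    done
    return 1                        # equal versions: '<' does not hold
}

lt() { cmp_versions "$1" '<' "$2"; }

if lt 1.15 2; then
    # matches the trace: an lcov 1.x toolchain selects the branch/function coverage options
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi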
00:05:39.505 14:52:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.505 14:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:39.505 [2024-11-18 14:52:02.981283] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:39.505 [2024-11-18 14:52:02.981396] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68804 ] 00:05:39.763 [2024-11-18 14:52:03.125993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:39.763 [2024-11-18 14:52:03.197852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.763 [2024-11-18 14:52:03.197894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.763 [2024-11-18 14:52:03.198034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.763 [2024-11-18 14:52:03.198041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.700 14:52:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.700 14:52:03 -- common/autotest_common.sh@862 -- # return 0 00:05:40.700 14:52:03 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:40.700 14:52:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.700 14:52:03 -- common/autotest_common.sh@10 -- # set +x 00:05:40.700 POWER: Env isn't set yet! 00:05:40.700 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:40.700 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:40.700 POWER: Cannot set governor of lcore 0 to userspace 00:05:40.700 POWER: Attempting to initialise PSTAT power management... 00:05:40.700 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:40.700 POWER: Cannot set governor of lcore 0 to performance 00:05:40.700 POWER: Attempting to initialise AMD PSTATE power management... 00:05:40.700 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:40.700 POWER: Cannot set governor of lcore 0 to userspace 00:05:40.700 POWER: Attempting to initialise CPPC power management... 00:05:40.700 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:40.700 POWER: Cannot set governor of lcore 0 to userspace 00:05:40.700 POWER: Attempting to initialise VM power management... 
00:05:40.700 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:40.700 POWER: Unable to set Power Management Environment for lcore 0 00:05:40.700 [2024-11-18 14:52:03.944091] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:40.700 [2024-11-18 14:52:03.944102] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:40.700 [2024-11-18 14:52:03.944110] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:40.700 [2024-11-18 14:52:03.944120] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:40.700 [2024-11-18 14:52:03.944127] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:40.700 [2024-11-18 14:52:03.944133] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:40.700 14:52:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.700 14:52:03 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:40.700 14:52:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.700 14:52:03 -- common/autotest_common.sh@10 -- # set +x 00:05:40.700 [2024-11-18 14:52:04.029337] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:40.700 14:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.700 14:52:04 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:40.700 14:52:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.700 14:52:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.700 14:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:40.700 ************************************ 00:05:40.700 START TEST scheduler_create_thread 00:05:40.700 ************************************ 00:05:40.700 14:52:04 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:40.700 14:52:04 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:40.700 14:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.700 14:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:40.700 2 00:05:40.700 14:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.700 14:52:04 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:40.700 14:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.700 14:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:40.700 3 00:05:40.700 14:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.700 14:52:04 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:40.700 14:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.700 14:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:40.700 4 00:05:40.700 14:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.700 14:52:04 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:40.700 14:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.700 14:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:40.700 5 00:05:40.700 14:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.700 14:52:04 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:40.700 14:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.700 14:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:40.700 6 00:05:40.700 14:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.700 14:52:04 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:40.700 14:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.700 14:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:40.700 7 00:05:40.700 14:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.700 14:52:04 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:40.700 14:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.700 14:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:40.700 8 00:05:40.700 14:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.700 14:52:04 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:40.700 14:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.700 14:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:40.700 9 00:05:40.700 14:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.700 14:52:04 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:40.700 14:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.700 14:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:40.700 10 00:05:40.700 14:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.700 14:52:04 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:40.700 14:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.700 14:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:40.700 14:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.700 14:52:04 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:40.700 14:52:04 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:40.700 14:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.700 14:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:40.700 14:52:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.700 14:52:04 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:40.700 14:52:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.700 14:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:42.076 14:52:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.076 14:52:05 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:42.076 14:52:05 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:42.076 14:52:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.076 14:52:05 -- common/autotest_common.sh@10 -- # set +x 00:05:43.453 ************************************ 00:05:43.453 END TEST scheduler_create_thread 00:05:43.453 ************************************ 00:05:43.453 14:52:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.453 00:05:43.453 real 0m2.615s 00:05:43.453 user 0m0.016s 00:05:43.453 sys 0m0.008s 00:05:43.453 14:52:06 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.453 14:52:06 -- common/autotest_common.sh@10 -- # set +x 00:05:43.453 14:52:06 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:43.453 14:52:06 -- scheduler/scheduler.sh@46 -- # killprocess 68804 00:05:43.453 14:52:06 -- common/autotest_common.sh@936 -- # '[' -z 68804 ']' 00:05:43.453 14:52:06 -- common/autotest_common.sh@940 -- # kill -0 68804 00:05:43.453 14:52:06 -- common/autotest_common.sh@941 -- # uname 00:05:43.453 14:52:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:43.453 14:52:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68804 00:05:43.453 killing process with pid 68804 00:05:43.453 14:52:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:43.453 14:52:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:43.453 14:52:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68804' 00:05:43.453 14:52:06 -- common/autotest_common.sh@955 -- # kill 68804 00:05:43.453 14:52:06 -- common/autotest_common.sh@960 -- # wait 68804 00:05:43.711 [2024-11-18 14:52:07.137009] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:43.971 ************************************ 00:05:43.971 END TEST event_scheduler 00:05:43.971 ************************************ 00:05:43.971 00:05:43.971 real 0m4.664s 00:05:43.971 user 0m8.717s 00:05:43.971 sys 0m0.423s 00:05:43.971 14:52:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.971 14:52:07 -- common/autotest_common.sh@10 -- # set +x 00:05:43.971 14:52:07 -- event/event.sh@51 -- # modprobe -n nbd 00:05:43.971 14:52:07 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:43.971 14:52:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.971 14:52:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.971 14:52:07 -- common/autotest_common.sh@10 -- # set +x 00:05:43.971 ************************************ 00:05:43.971 START TEST app_repeat 00:05:43.971 ************************************ 00:05:43.971 14:52:07 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:43.971 14:52:07 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.971 14:52:07 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.971 14:52:07 -- event/event.sh@13 -- # local nbd_list 00:05:43.971 14:52:07 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.971 14:52:07 -- event/event.sh@14 -- # local bdev_list 00:05:43.971 14:52:07 -- event/event.sh@15 -- # local repeat_times=4 00:05:43.971 14:52:07 -- event/event.sh@17 -- # modprobe nbd 00:05:43.971 Process app_repeat pid: 68922 00:05:43.971 14:52:07 -- event/event.sh@19 -- # repeat_pid=68922 00:05:43.971 14:52:07 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.971 14:52:07 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:43.971 14:52:07 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 68922' 00:05:43.971 spdk_app_start Round 0 00:05:43.971 14:52:07 -- event/event.sh@23 -- # for i in {0..2} 00:05:43.971 14:52:07 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:43.971 14:52:07 -- event/event.sh@25 -- # waitforlisten 68922 /var/tmp/spdk-nbd.sock 00:05:43.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
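The killprocess trace above (pid 68804 here, and pid 68565 earlier in the dpdk_mem_utility teardown) shows the common teardown pattern: verify the pid argument, probe the process with kill -0, check on Linux that the target is not a bare sudo wrapper, then send SIGTERM and wait for it to exit. A simplified reconstruction is sketched below; the control flow mirrors the xtrace from common/autotest_common.sh, but the redirects and exact error handling are assumptions.

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                  # '[' -z 68804 ']'
    kill -0 "$pid" 2>/dev/null || return 0     # already gone: nothing to do (redirect is an assumption)
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # the trace refuses to signal a bare sudo wrapper directly
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true            # reap the reactor process if it is our child, as in 'wait 68804'
}

# usage matching the traced call in scheduler.sh's EXIT trap:
# killprocess "$scheduler_pid"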
00:05:43.971 14:52:07 -- common/autotest_common.sh@829 -- # '[' -z 68922 ']' 00:05:43.971 14:52:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.971 14:52:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.971 14:52:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:43.971 14:52:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.971 14:52:07 -- common/autotest_common.sh@10 -- # set +x 00:05:43.971 [2024-11-18 14:52:07.486975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:43.971 [2024-11-18 14:52:07.487244] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68922 ] 00:05:44.229 [2024-11-18 14:52:07.625075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.229 [2024-11-18 14:52:07.683618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.229 [2024-11-18 14:52:07.683635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.164 14:52:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.164 14:52:08 -- common/autotest_common.sh@862 -- # return 0 00:05:45.164 14:52:08 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.164 Malloc0 00:05:45.164 14:52:08 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.423 Malloc1 00:05:45.423 14:52:08 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.423 14:52:08 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.423 14:52:08 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.423 14:52:08 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.423 14:52:08 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.423 14:52:08 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.423 14:52:08 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.423 14:52:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.423 14:52:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.423 14:52:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.423 14:52:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.423 14:52:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:45.423 14:52:08 -- bdev/nbd_common.sh@12 -- # local i 00:05:45.423 14:52:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.423 14:52:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.423 14:52:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.682 /dev/nbd0 00:05:45.941 14:52:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:45.941 14:52:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:45.941 14:52:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:45.941 14:52:09 -- common/autotest_common.sh@867 -- # local i 00:05:45.941 14:52:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:45.941 
14:52:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:45.941 14:52:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:45.941 14:52:09 -- common/autotest_common.sh@871 -- # break 00:05:45.941 14:52:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:45.941 14:52:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:45.941 14:52:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.941 1+0 records in 00:05:45.941 1+0 records out 00:05:45.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230964 s, 17.7 MB/s 00:05:45.941 14:52:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.941 14:52:09 -- common/autotest_common.sh@884 -- # size=4096 00:05:45.941 14:52:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.941 14:52:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:45.941 14:52:09 -- common/autotest_common.sh@887 -- # return 0 00:05:45.941 14:52:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.941 14:52:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.941 14:52:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.199 /dev/nbd1 00:05:46.199 14:52:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.199 14:52:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.199 14:52:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:46.199 14:52:09 -- common/autotest_common.sh@867 -- # local i 00:05:46.199 14:52:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:46.199 14:52:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:46.199 14:52:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:46.200 14:52:09 -- common/autotest_common.sh@871 -- # break 00:05:46.200 14:52:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:46.200 14:52:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:46.200 14:52:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.200 1+0 records in 00:05:46.200 1+0 records out 00:05:46.200 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300262 s, 13.6 MB/s 00:05:46.200 14:52:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.200 14:52:09 -- common/autotest_common.sh@884 -- # size=4096 00:05:46.200 14:52:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.200 14:52:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:46.200 14:52:09 -- common/autotest_common.sh@887 -- # return 0 00:05:46.200 14:52:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.200 14:52:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.200 14:52:09 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.200 14:52:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.200 14:52:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.458 14:52:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:46.458 { 00:05:46.458 "bdev_name": "Malloc0", 00:05:46.458 "nbd_device": "/dev/nbd0" 00:05:46.458 }, 00:05:46.458 { 00:05:46.458 "bdev_name": 
"Malloc1", 00:05:46.458 "nbd_device": "/dev/nbd1" 00:05:46.458 } 00:05:46.458 ]' 00:05:46.458 14:52:09 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.458 { 00:05:46.458 "bdev_name": "Malloc0", 00:05:46.458 "nbd_device": "/dev/nbd0" 00:05:46.458 }, 00:05:46.458 { 00:05:46.458 "bdev_name": "Malloc1", 00:05:46.458 "nbd_device": "/dev/nbd1" 00:05:46.458 } 00:05:46.458 ]' 00:05:46.458 14:52:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.458 14:52:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.458 /dev/nbd1' 00:05:46.458 14:52:09 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.458 /dev/nbd1' 00:05:46.458 14:52:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.458 14:52:09 -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.458 14:52:09 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.458 14:52:09 -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.458 14:52:09 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.458 14:52:09 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.458 14:52:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.458 14:52:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.458 14:52:09 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.458 14:52:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.458 14:52:09 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.458 14:52:09 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.458 256+0 records in 00:05:46.458 256+0 records out 00:05:46.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00734836 s, 143 MB/s 00:05:46.458 14:52:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.458 14:52:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.458 256+0 records in 00:05:46.458 256+0 records out 00:05:46.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256478 s, 40.9 MB/s 00:05:46.458 14:52:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.458 14:52:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.458 256+0 records in 00:05:46.458 256+0 records out 00:05:46.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299717 s, 35.0 MB/s 00:05:46.458 14:52:10 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.458 14:52:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.458 14:52:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.458 14:52:10 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.459 14:52:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.459 14:52:10 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.459 14:52:10 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.459 14:52:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.459 14:52:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.459 14:52:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.459 14:52:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.459 14:52:10 -- bdev/nbd_common.sh@85 
-- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.459 14:52:10 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.459 14:52:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.459 14:52:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.459 14:52:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.459 14:52:10 -- bdev/nbd_common.sh@51 -- # local i 00:05:46.459 14:52:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.459 14:52:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.026 14:52:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.026 14:52:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.026 14:52:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.026 14:52:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.026 14:52:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.026 14:52:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.026 14:52:10 -- bdev/nbd_common.sh@41 -- # break 00:05:47.026 14:52:10 -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.026 14:52:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.026 14:52:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.026 14:52:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.026 14:52:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.026 14:52:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.026 14:52:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.026 14:52:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.026 14:52:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.026 14:52:10 -- bdev/nbd_common.sh@41 -- # break 00:05:47.026 14:52:10 -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.026 14:52:10 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.026 14:52:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.026 14:52:10 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.285 14:52:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.285 14:52:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.285 14:52:10 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.544 14:52:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.544 14:52:10 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.544 14:52:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.544 14:52:10 -- bdev/nbd_common.sh@65 -- # true 00:05:47.544 14:52:10 -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.544 14:52:10 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.544 14:52:10 -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.544 14:52:10 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.544 14:52:10 -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.544 14:52:10 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.803 14:52:11 -- event/event.sh@35 -- # sleep 3 00:05:47.803 [2024-11-18 14:52:11.355163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.062 [2024-11-18 14:52:11.397575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.062 
[2024-11-18 14:52:11.397593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.062 [2024-11-18 14:52:11.449545] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.062 [2024-11-18 14:52:11.449606] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:51.350 14:52:14 -- event/event.sh@23 -- # for i in {0..2} 00:05:51.350 spdk_app_start Round 1 00:05:51.350 14:52:14 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:51.350 14:52:14 -- event/event.sh@25 -- # waitforlisten 68922 /var/tmp/spdk-nbd.sock 00:05:51.350 14:52:14 -- common/autotest_common.sh@829 -- # '[' -z 68922 ']' 00:05:51.350 14:52:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.350 14:52:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:51.350 14:52:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:51.350 14:52:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.350 14:52:14 -- common/autotest_common.sh@10 -- # set +x 00:05:51.350 14:52:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.350 14:52:14 -- common/autotest_common.sh@862 -- # return 0 00:05:51.350 14:52:14 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.350 Malloc0 00:05:51.350 14:52:14 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.609 Malloc1 00:05:51.609 14:52:14 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.609 14:52:14 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.609 14:52:14 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.609 14:52:14 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.609 14:52:14 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.609 14:52:14 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.609 14:52:14 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.609 14:52:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.609 14:52:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.609 14:52:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.609 14:52:14 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.609 14:52:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.609 14:52:14 -- bdev/nbd_common.sh@12 -- # local i 00:05:51.609 14:52:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.609 14:52:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.609 14:52:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.609 /dev/nbd0 00:05:51.868 14:52:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.868 14:52:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.868 14:52:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:51.868 14:52:15 -- common/autotest_common.sh@867 -- # local i 00:05:51.868 14:52:15 -- common/autotest_common.sh@869 
-- # (( i = 1 )) 00:05:51.868 14:52:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:51.868 14:52:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:51.868 14:52:15 -- common/autotest_common.sh@871 -- # break 00:05:51.868 14:52:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:51.868 14:52:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:51.868 14:52:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.868 1+0 records in 00:05:51.868 1+0 records out 00:05:51.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269056 s, 15.2 MB/s 00:05:51.868 14:52:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.868 14:52:15 -- common/autotest_common.sh@884 -- # size=4096 00:05:51.868 14:52:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.868 14:52:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:51.868 14:52:15 -- common/autotest_common.sh@887 -- # return 0 00:05:51.868 14:52:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.868 14:52:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.868 14:52:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.127 /dev/nbd1 00:05:52.127 14:52:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.127 14:52:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.127 14:52:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:52.127 14:52:15 -- common/autotest_common.sh@867 -- # local i 00:05:52.127 14:52:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:52.127 14:52:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:52.127 14:52:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:52.128 14:52:15 -- common/autotest_common.sh@871 -- # break 00:05:52.128 14:52:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:52.128 14:52:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:52.128 14:52:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.128 1+0 records in 00:05:52.128 1+0 records out 00:05:52.128 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355512 s, 11.5 MB/s 00:05:52.128 14:52:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.128 14:52:15 -- common/autotest_common.sh@884 -- # size=4096 00:05:52.128 14:52:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.128 14:52:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:52.128 14:52:15 -- common/autotest_common.sh@887 -- # return 0 00:05:52.128 14:52:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.128 14:52:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.128 14:52:15 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.128 14:52:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.128 14:52:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.386 14:52:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:52.386 { 00:05:52.386 "bdev_name": "Malloc0", 00:05:52.386 "nbd_device": "/dev/nbd0" 00:05:52.386 }, 00:05:52.386 { 
00:05:52.386 "bdev_name": "Malloc1", 00:05:52.386 "nbd_device": "/dev/nbd1" 00:05:52.386 } 00:05:52.386 ]' 00:05:52.386 14:52:15 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:52.386 { 00:05:52.387 "bdev_name": "Malloc0", 00:05:52.387 "nbd_device": "/dev/nbd0" 00:05:52.387 }, 00:05:52.387 { 00:05:52.387 "bdev_name": "Malloc1", 00:05:52.387 "nbd_device": "/dev/nbd1" 00:05:52.387 } 00:05:52.387 ]' 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:52.387 /dev/nbd1' 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:52.387 /dev/nbd1' 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@65 -- # count=2 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@95 -- # count=2 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:52.387 256+0 records in 00:05:52.387 256+0 records out 00:05:52.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00910389 s, 115 MB/s 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:52.387 256+0 records in 00:05:52.387 256+0 records out 00:05:52.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025246 s, 41.5 MB/s 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:52.387 256+0 records in 00:05:52.387 256+0 records out 00:05:52.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293678 s, 35.7 MB/s 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:52.387 14:52:15 
-- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@51 -- # local i 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.387 14:52:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.645 14:52:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.645 14:52:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.645 14:52:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.645 14:52:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.645 14:52:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.645 14:52:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.645 14:52:16 -- bdev/nbd_common.sh@41 -- # break 00:05:52.645 14:52:16 -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.645 14:52:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.645 14:52:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.904 14:52:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.904 14:52:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.904 14:52:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.904 14:52:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.904 14:52:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.904 14:52:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.904 14:52:16 -- bdev/nbd_common.sh@41 -- # break 00:05:52.904 14:52:16 -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.904 14:52:16 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.904 14:52:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.904 14:52:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.163 14:52:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:53.163 14:52:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.163 14:52:16 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:53.163 14:52:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:53.163 14:52:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:53.163 14:52:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.163 14:52:16 -- bdev/nbd_common.sh@65 -- # true 00:05:53.163 14:52:16 -- bdev/nbd_common.sh@65 -- # count=0 00:05:53.163 14:52:16 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:53.163 14:52:16 -- bdev/nbd_common.sh@104 -- # count=0 00:05:53.163 14:52:16 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:53.163 14:52:16 -- bdev/nbd_common.sh@109 -- # return 0 00:05:53.163 14:52:16 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:53.422 14:52:16 -- event/event.sh@35 -- # sleep 3 00:05:53.681 [2024-11-18 14:52:17.106206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.681 [2024-11-18 14:52:17.148839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on 
core 1 00:05:53.681 [2024-11-18 14:52:17.148859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.681 [2024-11-18 14:52:17.200593] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:53.681 [2024-11-18 14:52:17.200664] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:56.967 14:52:19 -- event/event.sh@23 -- # for i in {0..2} 00:05:56.967 spdk_app_start Round 2 00:05:56.967 14:52:19 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:56.967 14:52:19 -- event/event.sh@25 -- # waitforlisten 68922 /var/tmp/spdk-nbd.sock 00:05:56.967 14:52:19 -- common/autotest_common.sh@829 -- # '[' -z 68922 ']' 00:05:56.967 14:52:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.967 14:52:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:56.967 14:52:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:56.967 14:52:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.967 14:52:19 -- common/autotest_common.sh@10 -- # set +x 00:05:56.967 14:52:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.967 14:52:20 -- common/autotest_common.sh@862 -- # return 0 00:05:56.967 14:52:20 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.967 Malloc0 00:05:56.967 14:52:20 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.226 Malloc1 00:05:57.226 14:52:20 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.226 14:52:20 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.226 14:52:20 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.226 14:52:20 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:57.226 14:52:20 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.226 14:52:20 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:57.226 14:52:20 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.226 14:52:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.226 14:52:20 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.226 14:52:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:57.226 14:52:20 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.226 14:52:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:57.226 14:52:20 -- bdev/nbd_common.sh@12 -- # local i 00:05:57.226 14:52:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:57.226 14:52:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.226 14:52:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:57.485 /dev/nbd0 00:05:57.744 14:52:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:57.744 14:52:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:57.744 14:52:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:57.745 14:52:21 -- common/autotest_common.sh@867 -- # local i 00:05:57.745 14:52:21 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:57.745 14:52:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:57.745 14:52:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:57.745 14:52:21 -- common/autotest_common.sh@871 -- # break 00:05:57.745 14:52:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:57.745 14:52:21 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:57.745 14:52:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.745 1+0 records in 00:05:57.745 1+0 records out 00:05:57.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003311 s, 12.4 MB/s 00:05:57.745 14:52:21 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.745 14:52:21 -- common/autotest_common.sh@884 -- # size=4096 00:05:57.745 14:52:21 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.745 14:52:21 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:57.745 14:52:21 -- common/autotest_common.sh@887 -- # return 0 00:05:57.745 14:52:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.745 14:52:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.745 14:52:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:57.745 /dev/nbd1 00:05:57.745 14:52:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:57.745 14:52:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:57.745 14:52:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:57.745 14:52:21 -- common/autotest_common.sh@867 -- # local i 00:05:57.745 14:52:21 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:57.745 14:52:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:57.745 14:52:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:57.745 14:52:21 -- common/autotest_common.sh@871 -- # break 00:05:57.745 14:52:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:57.745 14:52:21 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:57.745 14:52:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.745 1+0 records in 00:05:57.745 1+0 records out 00:05:57.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335298 s, 12.2 MB/s 00:05:57.745 14:52:21 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.745 14:52:21 -- common/autotest_common.sh@884 -- # size=4096 00:05:57.745 14:52:21 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.004 14:52:21 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:58.004 14:52:21 -- common/autotest_common.sh@887 -- # return 0 00:05:58.004 14:52:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.004 14:52:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.004 14:52:21 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.004 14:52:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.004 14:52:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.004 14:52:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:58.004 { 00:05:58.004 "bdev_name": "Malloc0", 00:05:58.004 "nbd_device": "/dev/nbd0" 
00:05:58.004 }, 00:05:58.004 { 00:05:58.004 "bdev_name": "Malloc1", 00:05:58.004 "nbd_device": "/dev/nbd1" 00:05:58.004 } 00:05:58.004 ]' 00:05:58.004 14:52:21 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:58.004 { 00:05:58.004 "bdev_name": "Malloc0", 00:05:58.004 "nbd_device": "/dev/nbd0" 00:05:58.004 }, 00:05:58.004 { 00:05:58.004 "bdev_name": "Malloc1", 00:05:58.004 "nbd_device": "/dev/nbd1" 00:05:58.004 } 00:05:58.004 ]' 00:05:58.004 14:52:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.004 14:52:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:58.004 /dev/nbd1' 00:05:58.004 14:52:21 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:58.004 /dev/nbd1' 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@65 -- # count=2 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@95 -- # count=2 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:58.263 256+0 records in 00:05:58.263 256+0 records out 00:05:58.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00842398 s, 124 MB/s 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.263 256+0 records in 00:05:58.263 256+0 records out 00:05:58.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241546 s, 43.4 MB/s 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.263 256+0 records in 00:05:58.263 256+0 records out 00:05:58.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0343759 s, 30.5 MB/s 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@51 -- # local i 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.263 14:52:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:58.522 14:52:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:58.522 14:52:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:58.522 14:52:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:58.522 14:52:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.522 14:52:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.522 14:52:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:58.522 14:52:21 -- bdev/nbd_common.sh@41 -- # break 00:05:58.522 14:52:21 -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.522 14:52:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.522 14:52:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:58.781 14:52:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:58.781 14:52:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:58.781 14:52:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:58.781 14:52:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.781 14:52:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.781 14:52:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:58.781 14:52:22 -- bdev/nbd_common.sh@41 -- # break 00:05:58.781 14:52:22 -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.781 14:52:22 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.781 14:52:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.781 14:52:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.040 14:52:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:59.040 14:52:22 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:59.040 14:52:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.040 14:52:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.040 14:52:22 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.040 14:52:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.040 14:52:22 -- bdev/nbd_common.sh@65 -- # true 00:05:59.040 14:52:22 -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.040 14:52:22 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.040 14:52:22 -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.040 14:52:22 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.040 14:52:22 -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.040 14:52:22 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:59.298 14:52:22 -- event/event.sh@35 -- # sleep 3 00:05:59.298 [2024-11-18 14:52:22.870909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.571 [2024-11-18 14:52:22.913712] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:05:59.571 [2024-11-18 14:52:22.913730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.572 [2024-11-18 14:52:22.965697] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.572 [2024-11-18 14:52:22.965769] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:02.888 14:52:25 -- event/event.sh@38 -- # waitforlisten 68922 /var/tmp/spdk-nbd.sock 00:06:02.888 14:52:25 -- common/autotest_common.sh@829 -- # '[' -z 68922 ']' 00:06:02.888 14:52:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.888 14:52:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:02.888 14:52:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:02.888 14:52:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.888 14:52:25 -- common/autotest_common.sh@10 -- # set +x 00:06:02.888 14:52:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.888 14:52:25 -- common/autotest_common.sh@862 -- # return 0 00:06:02.888 14:52:25 -- event/event.sh@39 -- # killprocess 68922 00:06:02.888 14:52:25 -- common/autotest_common.sh@936 -- # '[' -z 68922 ']' 00:06:02.888 14:52:25 -- common/autotest_common.sh@940 -- # kill -0 68922 00:06:02.888 14:52:25 -- common/autotest_common.sh@941 -- # uname 00:06:02.888 14:52:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:02.888 14:52:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68922 00:06:02.888 14:52:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:02.888 14:52:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:02.888 killing process with pid 68922 00:06:02.888 14:52:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68922' 00:06:02.888 14:52:26 -- common/autotest_common.sh@955 -- # kill 68922 00:06:02.888 14:52:26 -- common/autotest_common.sh@960 -- # wait 68922 00:06:02.888 spdk_app_start is called in Round 0. 00:06:02.888 Shutdown signal received, stop current app iteration 00:06:02.888 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:02.888 spdk_app_start is called in Round 1. 00:06:02.888 Shutdown signal received, stop current app iteration 00:06:02.888 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:02.888 spdk_app_start is called in Round 2. 00:06:02.888 Shutdown signal received, stop current app iteration 00:06:02.888 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:02.888 spdk_app_start is called in Round 3. 
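
What those app_repeat rounds actually exercised is the dd/cmp data path from bdev/nbd_common.sh replayed twice above. A minimal standalone sketch of that write/verify flow, assuming /dev/nbd0 and /dev/nbd1 are already exported via the nbd_start_disk RPCs shown earlier, and using a throwaway temp file in place of test/event/nbdrandtest:

#!/usr/bin/env bash
# Sketch of the nbd write/verify pattern traced above; not the repo's helper itself.
set -euo pipefail

nbd_list=(/dev/nbd0 /dev/nbd1)                 # devices assumed to be connected already
tmp_file=$(mktemp /tmp/nbdrandtest.XXXXXX)     # stand-in for test/event/nbdrandtest

# 1 MiB of random reference data (bs=4096 count=256, matching the dd lines in the log)
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

# write the same pattern to every NBD device, bypassing the page cache
for nbd in "${nbd_list[@]}"; do
  dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
done

# read back and compare byte-for-byte over the first 1 MiB; cmp's non-zero exit aborts via set -e
for nbd in "${nbd_list[@]}"; do
  cmp -b -n 1M "$tmp_file" "$nbd"
done

rm "$tmp_file"
echo "nbd data verify OK"
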
00:06:02.888 Shutdown signal received, stop current app iteration 00:06:02.888 14:52:26 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:02.888 14:52:26 -- event/event.sh@42 -- # return 0 00:06:02.888 00:06:02.888 real 0m18.713s 00:06:02.888 user 0m42.338s 00:06:02.888 sys 0m2.678s 00:06:02.888 14:52:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.888 14:52:26 -- common/autotest_common.sh@10 -- # set +x 00:06:02.888 ************************************ 00:06:02.888 END TEST app_repeat 00:06:02.888 ************************************ 00:06:02.888 14:52:26 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:02.888 14:52:26 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:02.888 14:52:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.888 14:52:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.888 14:52:26 -- common/autotest_common.sh@10 -- # set +x 00:06:02.888 ************************************ 00:06:02.888 START TEST cpu_locks 00:06:02.888 ************************************ 00:06:02.888 14:52:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:02.888 * Looking for test storage... 00:06:02.888 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:02.888 14:52:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:02.888 14:52:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:02.888 14:52:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:02.888 14:52:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:02.888 14:52:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:02.888 14:52:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:02.888 14:52:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:02.888 14:52:26 -- scripts/common.sh@335 -- # IFS=.-: 00:06:02.888 14:52:26 -- scripts/common.sh@335 -- # read -ra ver1 00:06:02.888 14:52:26 -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.888 14:52:26 -- scripts/common.sh@336 -- # read -ra ver2 00:06:02.888 14:52:26 -- scripts/common.sh@337 -- # local 'op=<' 00:06:02.888 14:52:26 -- scripts/common.sh@339 -- # ver1_l=2 00:06:02.888 14:52:26 -- scripts/common.sh@340 -- # ver2_l=1 00:06:02.888 14:52:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:02.888 14:52:26 -- scripts/common.sh@343 -- # case "$op" in 00:06:02.888 14:52:26 -- scripts/common.sh@344 -- # : 1 00:06:02.888 14:52:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:02.888 14:52:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.888 14:52:26 -- scripts/common.sh@364 -- # decimal 1 00:06:02.888 14:52:26 -- scripts/common.sh@352 -- # local d=1 00:06:02.888 14:52:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.888 14:52:26 -- scripts/common.sh@354 -- # echo 1 00:06:02.888 14:52:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:02.888 14:52:26 -- scripts/common.sh@365 -- # decimal 2 00:06:02.888 14:52:26 -- scripts/common.sh@352 -- # local d=2 00:06:02.888 14:52:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.888 14:52:26 -- scripts/common.sh@354 -- # echo 2 00:06:02.888 14:52:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:02.888 14:52:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:02.888 14:52:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:02.888 14:52:26 -- scripts/common.sh@367 -- # return 0 00:06:02.888 14:52:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.888 14:52:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:02.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.888 --rc genhtml_branch_coverage=1 00:06:02.888 --rc genhtml_function_coverage=1 00:06:02.888 --rc genhtml_legend=1 00:06:02.888 --rc geninfo_all_blocks=1 00:06:02.888 --rc geninfo_unexecuted_blocks=1 00:06:02.888 00:06:02.888 ' 00:06:02.888 14:52:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:02.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.888 --rc genhtml_branch_coverage=1 00:06:02.888 --rc genhtml_function_coverage=1 00:06:02.888 --rc genhtml_legend=1 00:06:02.888 --rc geninfo_all_blocks=1 00:06:02.888 --rc geninfo_unexecuted_blocks=1 00:06:02.888 00:06:02.888 ' 00:06:02.888 14:52:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:02.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.888 --rc genhtml_branch_coverage=1 00:06:02.888 --rc genhtml_function_coverage=1 00:06:02.888 --rc genhtml_legend=1 00:06:02.888 --rc geninfo_all_blocks=1 00:06:02.888 --rc geninfo_unexecuted_blocks=1 00:06:02.888 00:06:02.888 ' 00:06:02.888 14:52:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:02.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.888 --rc genhtml_branch_coverage=1 00:06:02.888 --rc genhtml_function_coverage=1 00:06:02.888 --rc genhtml_legend=1 00:06:02.888 --rc geninfo_all_blocks=1 00:06:02.888 --rc geninfo_unexecuted_blocks=1 00:06:02.888 00:06:02.888 ' 00:06:02.888 14:52:26 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:02.888 14:52:26 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:02.888 14:52:26 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:02.888 14:52:26 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:02.888 14:52:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.888 14:52:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.888 14:52:26 -- common/autotest_common.sh@10 -- # set +x 00:06:02.888 ************************************ 00:06:02.888 START TEST default_locks 00:06:02.888 ************************************ 00:06:02.889 14:52:26 -- common/autotest_common.sh@1114 -- # default_locks 00:06:02.889 14:52:26 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69554 00:06:02.889 14:52:26 -- event/cpu_locks.sh@47 -- # waitforlisten 69554 00:06:02.889 14:52:26 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
-m 0x1 00:06:02.889 14:52:26 -- common/autotest_common.sh@829 -- # '[' -z 69554 ']' 00:06:02.889 14:52:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.889 14:52:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.889 14:52:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.889 14:52:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.889 14:52:26 -- common/autotest_common.sh@10 -- # set +x 00:06:02.889 [2024-11-18 14:52:26.460030] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:02.889 [2024-11-18 14:52:26.460120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69554 ] 00:06:03.147 [2024-11-18 14:52:26.590375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.147 [2024-11-18 14:52:26.646056] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:03.147 [2024-11-18 14:52:26.646225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.084 14:52:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.084 14:52:27 -- common/autotest_common.sh@862 -- # return 0 00:06:04.084 14:52:27 -- event/cpu_locks.sh@49 -- # locks_exist 69554 00:06:04.084 14:52:27 -- event/cpu_locks.sh@22 -- # lslocks -p 69554 00:06:04.084 14:52:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.343 14:52:27 -- event/cpu_locks.sh@50 -- # killprocess 69554 00:06:04.343 14:52:27 -- common/autotest_common.sh@936 -- # '[' -z 69554 ']' 00:06:04.343 14:52:27 -- common/autotest_common.sh@940 -- # kill -0 69554 00:06:04.343 14:52:27 -- common/autotest_common.sh@941 -- # uname 00:06:04.343 14:52:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.343 14:52:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69554 00:06:04.343 14:52:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:04.343 killing process with pid 69554 00:06:04.343 14:52:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:04.343 14:52:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69554' 00:06:04.343 14:52:27 -- common/autotest_common.sh@955 -- # kill 69554 00:06:04.343 14:52:27 -- common/autotest_common.sh@960 -- # wait 69554 00:06:04.910 14:52:28 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69554 00:06:04.910 14:52:28 -- common/autotest_common.sh@650 -- # local es=0 00:06:04.910 14:52:28 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69554 00:06:04.910 14:52:28 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:04.910 14:52:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.910 14:52:28 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:04.910 14:52:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.910 14:52:28 -- common/autotest_common.sh@653 -- # waitforlisten 69554 00:06:04.910 14:52:28 -- common/autotest_common.sh@829 -- # '[' -z 69554 ']' 00:06:04.910 14:52:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.910 14:52:28 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.910 14:52:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.910 14:52:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.910 14:52:28 -- common/autotest_common.sh@10 -- # set +x 00:06:04.910 ERROR: process (pid: 69554) is no longer running 00:06:04.910 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69554) - No such process 00:06:04.910 14:52:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.910 14:52:28 -- common/autotest_common.sh@862 -- # return 1 00:06:04.910 14:52:28 -- common/autotest_common.sh@653 -- # es=1 00:06:04.910 14:52:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:04.910 14:52:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:04.910 14:52:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:04.910 14:52:28 -- event/cpu_locks.sh@54 -- # no_locks 00:06:04.910 14:52:28 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:04.910 14:52:28 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:04.910 14:52:28 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:04.910 00:06:04.910 real 0m1.840s 00:06:04.910 user 0m1.984s 00:06:04.910 sys 0m0.540s 00:06:04.910 14:52:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.910 ************************************ 00:06:04.910 END TEST default_locks 00:06:04.910 ************************************ 00:06:04.910 14:52:28 -- common/autotest_common.sh@10 -- # set +x 00:06:04.910 14:52:28 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:04.910 14:52:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.910 14:52:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.910 14:52:28 -- common/autotest_common.sh@10 -- # set +x 00:06:04.910 ************************************ 00:06:04.910 START TEST default_locks_via_rpc 00:06:04.910 ************************************ 00:06:04.910 14:52:28 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:04.910 14:52:28 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69613 00:06:04.911 14:52:28 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.911 14:52:28 -- event/cpu_locks.sh@63 -- # waitforlisten 69613 00:06:04.911 14:52:28 -- common/autotest_common.sh@829 -- # '[' -z 69613 ']' 00:06:04.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.911 14:52:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.911 14:52:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.911 14:52:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.911 14:52:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.911 14:52:28 -- common/autotest_common.sh@10 -- # set +x 00:06:04.911 [2024-11-18 14:52:28.359785] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:04.911 [2024-11-18 14:52:28.359914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69613 ] 00:06:05.170 [2024-11-18 14:52:28.500569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.170 [2024-11-18 14:52:28.556312] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:05.170 [2024-11-18 14:52:28.556539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.109 14:52:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.109 14:52:29 -- common/autotest_common.sh@862 -- # return 0 00:06:06.109 14:52:29 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:06.109 14:52:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.109 14:52:29 -- common/autotest_common.sh@10 -- # set +x 00:06:06.109 14:52:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.109 14:52:29 -- event/cpu_locks.sh@67 -- # no_locks 00:06:06.109 14:52:29 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:06.109 14:52:29 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:06.109 14:52:29 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:06.109 14:52:29 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:06.109 14:52:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.109 14:52:29 -- common/autotest_common.sh@10 -- # set +x 00:06:06.109 14:52:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.109 14:52:29 -- event/cpu_locks.sh@71 -- # locks_exist 69613 00:06:06.109 14:52:29 -- event/cpu_locks.sh@22 -- # lslocks -p 69613 00:06:06.109 14:52:29 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.367 14:52:29 -- event/cpu_locks.sh@73 -- # killprocess 69613 00:06:06.367 14:52:29 -- common/autotest_common.sh@936 -- # '[' -z 69613 ']' 00:06:06.367 14:52:29 -- common/autotest_common.sh@940 -- # kill -0 69613 00:06:06.367 14:52:29 -- common/autotest_common.sh@941 -- # uname 00:06:06.367 14:52:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:06.367 14:52:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69613 00:06:06.367 killing process with pid 69613 00:06:06.367 14:52:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:06.367 14:52:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:06.367 14:52:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69613' 00:06:06.367 14:52:29 -- common/autotest_common.sh@955 -- # kill 69613 00:06:06.367 14:52:29 -- common/autotest_common.sh@960 -- # wait 69613 00:06:06.625 ************************************ 00:06:06.625 END TEST default_locks_via_rpc 00:06:06.625 ************************************ 00:06:06.625 00:06:06.625 real 0m1.898s 00:06:06.625 user 0m2.085s 00:06:06.625 sys 0m0.573s 00:06:06.625 14:52:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.625 14:52:30 -- common/autotest_common.sh@10 -- # set +x 00:06:06.883 14:52:30 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:06.883 14:52:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.883 14:52:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.883 14:52:30 -- common/autotest_common.sh@10 -- # set +x 00:06:06.883 
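
Both default_locks variants above come down to the same probe: asking lslocks whether the target process is holding its spdk_cpu_lock files. A small sketch of that check as it appears in the trace; the function name here is illustrative, not the repo's helper:

# Illustrative probe for the "lslocks -p <pid> | grep -q spdk_cpu_lock" pattern in the trace.
has_cpu_locks() {
  local pid=$1
  # lslocks lists the file locks held by the process; SPDK's per-core locks show up
  # as /var/tmp/spdk_cpu_lock_NNN entries
  lslocks -p "$pid" | grep -q spdk_cpu_lock
}

# usage sketch: $spdk_tgt_pid is whatever PID the target was started with
if has_cpu_locks "$spdk_tgt_pid"; then
  echo "target still holds its CPU core locks"
else
  echo "no core locks held (e.g. started with --disable-cpumask-locks)"
fi
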
************************************ 00:06:06.883 START TEST non_locking_app_on_locked_coremask 00:06:06.883 ************************************ 00:06:06.883 14:52:30 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:06.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.883 14:52:30 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69682 00:06:06.883 14:52:30 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.883 14:52:30 -- event/cpu_locks.sh@81 -- # waitforlisten 69682 /var/tmp/spdk.sock 00:06:06.883 14:52:30 -- common/autotest_common.sh@829 -- # '[' -z 69682 ']' 00:06:06.884 14:52:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.884 14:52:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.884 14:52:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.884 14:52:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.884 14:52:30 -- common/autotest_common.sh@10 -- # set +x 00:06:06.884 [2024-11-18 14:52:30.299562] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:06.884 [2024-11-18 14:52:30.299877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69682 ] 00:06:06.884 [2024-11-18 14:52:30.424496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.142 [2024-11-18 14:52:30.494202] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:07.142 [2024-11-18 14:52:30.494666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.709 14:52:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.709 14:52:31 -- common/autotest_common.sh@862 -- # return 0 00:06:07.709 14:52:31 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69710 00:06:07.709 14:52:31 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:07.709 14:52:31 -- event/cpu_locks.sh@85 -- # waitforlisten 69710 /var/tmp/spdk2.sock 00:06:07.709 14:52:31 -- common/autotest_common.sh@829 -- # '[' -z 69710 ']' 00:06:07.709 14:52:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.709 14:52:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.709 14:52:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.709 14:52:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.709 14:52:31 -- common/autotest_common.sh@10 -- # set +x 00:06:07.709 [2024-11-18 14:52:31.288082] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:07.709 [2024-11-18 14:52:31.288376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69710 ] 00:06:07.967 [2024-11-18 14:52:31.429420] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:07.967 [2024-11-18 14:52:31.429461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.226 [2024-11-18 14:52:31.561398] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:08.226 [2024-11-18 14:52:31.561543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.792 14:52:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.792 14:52:32 -- common/autotest_common.sh@862 -- # return 0 00:06:08.792 14:52:32 -- event/cpu_locks.sh@87 -- # locks_exist 69682 00:06:08.792 14:52:32 -- event/cpu_locks.sh@22 -- # lslocks -p 69682 00:06:08.792 14:52:32 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.728 14:52:33 -- event/cpu_locks.sh@89 -- # killprocess 69682 00:06:09.728 14:52:33 -- common/autotest_common.sh@936 -- # '[' -z 69682 ']' 00:06:09.728 14:52:33 -- common/autotest_common.sh@940 -- # kill -0 69682 00:06:09.728 14:52:33 -- common/autotest_common.sh@941 -- # uname 00:06:09.728 14:52:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:09.729 14:52:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69682 00:06:09.729 killing process with pid 69682 00:06:09.729 14:52:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:09.729 14:52:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:09.729 14:52:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69682' 00:06:09.729 14:52:33 -- common/autotest_common.sh@955 -- # kill 69682 00:06:09.729 14:52:33 -- common/autotest_common.sh@960 -- # wait 69682 00:06:10.296 14:52:33 -- event/cpu_locks.sh@90 -- # killprocess 69710 00:06:10.296 14:52:33 -- common/autotest_common.sh@936 -- # '[' -z 69710 ']' 00:06:10.296 14:52:33 -- common/autotest_common.sh@940 -- # kill -0 69710 00:06:10.296 14:52:33 -- common/autotest_common.sh@941 -- # uname 00:06:10.296 14:52:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:10.296 14:52:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69710 00:06:10.296 killing process with pid 69710 00:06:10.296 14:52:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:10.296 14:52:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:10.296 14:52:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69710' 00:06:10.296 14:52:33 -- common/autotest_common.sh@955 -- # kill 69710 00:06:10.296 14:52:33 -- common/autotest_common.sh@960 -- # wait 69710 00:06:10.554 00:06:10.554 real 0m3.837s 00:06:10.554 user 0m4.222s 00:06:10.554 sys 0m1.115s 00:06:10.554 14:52:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.554 14:52:34 -- common/autotest_common.sh@10 -- # set +x 00:06:10.554 ************************************ 00:06:10.554 END TEST non_locking_app_on_locked_coremask 00:06:10.554 ************************************ 00:06:10.554 14:52:34 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:10.554 14:52:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:10.554 14:52:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.554 14:52:34 -- common/autotest_common.sh@10 -- # set +x 00:06:10.813 ************************************ 00:06:10.813 START TEST locking_app_on_unlocked_coremask 00:06:10.813 ************************************ 00:06:10.813 14:52:34 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:10.813 14:52:34 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=69789 00:06:10.813 14:52:34 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:10.813 14:52:34 -- event/cpu_locks.sh@99 -- # waitforlisten 69789 /var/tmp/spdk.sock 00:06:10.813 14:52:34 -- common/autotest_common.sh@829 -- # '[' -z 69789 ']' 00:06:10.813 14:52:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.813 14:52:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.813 14:52:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.813 14:52:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.813 14:52:34 -- common/autotest_common.sh@10 -- # set +x 00:06:10.813 [2024-11-18 14:52:34.207554] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:10.813 [2024-11-18 14:52:34.207667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69789 ] 00:06:10.813 [2024-11-18 14:52:34.347437] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:10.813 [2024-11-18 14:52:34.347477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.072 [2024-11-18 14:52:34.404253] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:11.072 [2024-11-18 14:52:34.404481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.640 14:52:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.640 14:52:35 -- common/autotest_common.sh@862 -- # return 0 00:06:11.640 14:52:35 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69817 00:06:11.640 14:52:35 -- event/cpu_locks.sh@103 -- # waitforlisten 69817 /var/tmp/spdk2.sock 00:06:11.640 14:52:35 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:11.640 14:52:35 -- common/autotest_common.sh@829 -- # '[' -z 69817 ']' 00:06:11.640 14:52:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.640 14:52:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.640 14:52:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.640 14:52:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.640 14:52:35 -- common/autotest_common.sh@10 -- # set +x 00:06:11.899 [2024-11-18 14:52:35.251727] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:11.899 [2024-11-18 14:52:35.252080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69817 ] 00:06:11.899 [2024-11-18 14:52:35.401998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.158 [2024-11-18 14:52:35.548461] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:12.158 [2024-11-18 14:52:35.548626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.532 14:52:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.532 14:52:36 -- common/autotest_common.sh@862 -- # return 0 00:06:13.532 14:52:36 -- event/cpu_locks.sh@105 -- # locks_exist 69817 00:06:13.532 14:52:36 -- event/cpu_locks.sh@22 -- # lslocks -p 69817 00:06:13.532 14:52:36 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.098 14:52:37 -- event/cpu_locks.sh@107 -- # killprocess 69789 00:06:14.098 14:52:37 -- common/autotest_common.sh@936 -- # '[' -z 69789 ']' 00:06:14.098 14:52:37 -- common/autotest_common.sh@940 -- # kill -0 69789 00:06:14.098 14:52:37 -- common/autotest_common.sh@941 -- # uname 00:06:14.098 14:52:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:14.098 14:52:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69789 00:06:14.357 killing process with pid 69789 00:06:14.357 14:52:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:14.357 14:52:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:14.357 14:52:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69789' 00:06:14.357 14:52:37 -- common/autotest_common.sh@955 -- # kill 69789 00:06:14.357 14:52:37 -- common/autotest_common.sh@960 -- # wait 69789 00:06:15.292 14:52:38 -- event/cpu_locks.sh@108 -- # killprocess 69817 00:06:15.292 14:52:38 -- common/autotest_common.sh@936 -- # '[' -z 69817 ']' 00:06:15.292 14:52:38 -- common/autotest_common.sh@940 -- # kill -0 69817 00:06:15.292 14:52:38 -- common/autotest_common.sh@941 -- # uname 00:06:15.292 14:52:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:15.292 14:52:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69817 00:06:15.292 killing process with pid 69817 00:06:15.292 14:52:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:15.292 14:52:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:15.292 14:52:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69817' 00:06:15.292 14:52:38 -- common/autotest_common.sh@955 -- # kill 69817 00:06:15.292 14:52:38 -- common/autotest_common.sh@960 -- # wait 69817 00:06:15.859 ************************************ 00:06:15.859 END TEST locking_app_on_unlocked_coremask 00:06:15.859 ************************************ 00:06:15.859 00:06:15.859 real 0m5.012s 00:06:15.859 user 0m5.518s 00:06:15.859 sys 0m1.210s 00:06:15.859 14:52:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.859 14:52:39 -- common/autotest_common.sh@10 -- # set +x 00:06:15.859 14:52:39 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:15.859 14:52:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.859 14:52:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.859 14:52:39 -- common/autotest_common.sh@10 -- # set +x 
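
Every one of these tests tears its targets down through the killprocess sequence printed repeatedly above. A rough reconstruction of that teardown from the trace (the ps/sudo guard mirrors what the log shows; the exact internals of common/autotest_common.sh may differ):

# Reconstructed from the killprocess lines in the trace; assumes the PID was started
# as a background child of this shell so that `wait` applies.
killprocess_sketch() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do

  if [ "$(uname)" = Linux ]; then
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    # the helper refuses to signal a sudo process; normally this is reactor_0
    [ "$process_name" = sudo ] && return 1
  fi

  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true
}
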
00:06:15.859 ************************************ 00:06:15.859 START TEST locking_app_on_locked_coremask 00:06:15.859 ************************************ 00:06:15.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.859 14:52:39 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:15.859 14:52:39 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69915 00:06:15.859 14:52:39 -- event/cpu_locks.sh@116 -- # waitforlisten 69915 /var/tmp/spdk.sock 00:06:15.859 14:52:39 -- common/autotest_common.sh@829 -- # '[' -z 69915 ']' 00:06:15.859 14:52:39 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.859 14:52:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.859 14:52:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.859 14:52:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.859 14:52:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.859 14:52:39 -- common/autotest_common.sh@10 -- # set +x 00:06:15.859 [2024-11-18 14:52:39.275032] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:15.859 [2024-11-18 14:52:39.275140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69915 ] 00:06:15.859 [2024-11-18 14:52:39.413192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.118 [2024-11-18 14:52:39.475443] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:16.118 [2024-11-18 14:52:39.475613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.685 14:52:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.685 14:52:40 -- common/autotest_common.sh@862 -- # return 0 00:06:16.685 14:52:40 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69943 00:06:16.685 14:52:40 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:16.685 14:52:40 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69943 /var/tmp/spdk2.sock 00:06:16.685 14:52:40 -- common/autotest_common.sh@650 -- # local es=0 00:06:16.685 14:52:40 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69943 /var/tmp/spdk2.sock 00:06:16.685 14:52:40 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:16.685 14:52:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.685 14:52:40 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:16.685 14:52:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.685 14:52:40 -- common/autotest_common.sh@653 -- # waitforlisten 69943 /var/tmp/spdk2.sock 00:06:16.685 14:52:40 -- common/autotest_common.sh@829 -- # '[' -z 69943 ']' 00:06:16.685 14:52:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.685 14:52:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.685 14:52:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:16.685 14:52:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.685 14:52:40 -- common/autotest_common.sh@10 -- # set +x 00:06:16.943 [2024-11-18 14:52:40.313365] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:16.943 [2024-11-18 14:52:40.313630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69943 ] 00:06:16.943 [2024-11-18 14:52:40.454351] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69915 has claimed it. 00:06:16.943 [2024-11-18 14:52:40.454418] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:17.509 ERROR: process (pid: 69943) is no longer running 00:06:17.509 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69943) - No such process 00:06:17.509 14:52:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.509 14:52:41 -- common/autotest_common.sh@862 -- # return 1 00:06:17.509 14:52:41 -- common/autotest_common.sh@653 -- # es=1 00:06:17.509 14:52:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.509 14:52:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:17.509 14:52:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.509 14:52:41 -- event/cpu_locks.sh@122 -- # locks_exist 69915 00:06:17.509 14:52:41 -- event/cpu_locks.sh@22 -- # lslocks -p 69915 00:06:17.509 14:52:41 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.077 14:52:41 -- event/cpu_locks.sh@124 -- # killprocess 69915 00:06:18.077 14:52:41 -- common/autotest_common.sh@936 -- # '[' -z 69915 ']' 00:06:18.077 14:52:41 -- common/autotest_common.sh@940 -- # kill -0 69915 00:06:18.077 14:52:41 -- common/autotest_common.sh@941 -- # uname 00:06:18.077 14:52:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:18.077 14:52:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69915 00:06:18.077 killing process with pid 69915 00:06:18.077 14:52:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:18.078 14:52:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:18.078 14:52:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69915' 00:06:18.078 14:52:41 -- common/autotest_common.sh@955 -- # kill 69915 00:06:18.078 14:52:41 -- common/autotest_common.sh@960 -- # wait 69915 00:06:18.645 ************************************ 00:06:18.645 END TEST locking_app_on_locked_coremask 00:06:18.645 ************************************ 00:06:18.645 00:06:18.645 real 0m2.802s 00:06:18.645 user 0m3.197s 00:06:18.645 sys 0m0.727s 00:06:18.645 14:52:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.645 14:52:42 -- common/autotest_common.sh@10 -- # set +x 00:06:18.645 14:52:42 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:18.645 14:52:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:18.645 14:52:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.645 14:52:42 -- common/autotest_common.sh@10 -- # set +x 00:06:18.645 ************************************ 00:06:18.645 START TEST locking_overlapped_coremask 00:06:18.645 ************************************ 00:06:18.645 14:52:42 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:18.645 14:52:42 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=70000 00:06:18.645 14:52:42 -- event/cpu_locks.sh@133 -- # waitforlisten 70000 /var/tmp/spdk.sock 00:06:18.645 14:52:42 -- common/autotest_common.sh@829 -- # '[' -z 70000 ']' 00:06:18.645 14:52:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.645 14:52:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.645 14:52:42 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:18.645 14:52:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.645 14:52:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.645 14:52:42 -- common/autotest_common.sh@10 -- # set +x 00:06:18.645 [2024-11-18 14:52:42.129541] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:18.645 [2024-11-18 14:52:42.129650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70000 ] 00:06:18.904 [2024-11-18 14:52:42.266606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.904 [2024-11-18 14:52:42.330468] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:18.904 [2024-11-18 14:52:42.331086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.904 [2024-11-18 14:52:42.331153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.904 [2024-11-18 14:52:42.331154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.838 14:52:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.838 14:52:43 -- common/autotest_common.sh@862 -- # return 0 00:06:19.838 14:52:43 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:19.838 14:52:43 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=70031 00:06:19.838 14:52:43 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 70031 /var/tmp/spdk2.sock 00:06:19.838 14:52:43 -- common/autotest_common.sh@650 -- # local es=0 00:06:19.838 14:52:43 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70031 /var/tmp/spdk2.sock 00:06:19.838 14:52:43 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:19.838 14:52:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.838 14:52:43 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:19.838 14:52:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.838 14:52:43 -- common/autotest_common.sh@653 -- # waitforlisten 70031 /var/tmp/spdk2.sock 00:06:19.838 14:52:43 -- common/autotest_common.sh@829 -- # '[' -z 70031 ']' 00:06:19.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.838 14:52:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.838 14:52:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.838 14:52:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:19.838 14:52:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.838 14:52:43 -- common/autotest_common.sh@10 -- # set +x 00:06:19.838 [2024-11-18 14:52:43.159270] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:19.838 [2024-11-18 14:52:43.159367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70031 ] 00:06:19.838 [2024-11-18 14:52:43.300361] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70000 has claimed it. 00:06:19.838 [2024-11-18 14:52:43.300427] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:20.404 ERROR: process (pid: 70031) is no longer running 00:06:20.404 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (70031) - No such process 00:06:20.404 14:52:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.404 14:52:43 -- common/autotest_common.sh@862 -- # return 1 00:06:20.404 14:52:43 -- common/autotest_common.sh@653 -- # es=1 00:06:20.404 14:52:43 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.404 14:52:43 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:20.404 14:52:43 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.404 14:52:43 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:20.404 14:52:43 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:20.404 14:52:43 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:20.404 14:52:43 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:20.404 14:52:43 -- event/cpu_locks.sh@141 -- # killprocess 70000 00:06:20.404 14:52:43 -- common/autotest_common.sh@936 -- # '[' -z 70000 ']' 00:06:20.404 14:52:43 -- common/autotest_common.sh@940 -- # kill -0 70000 00:06:20.404 14:52:43 -- common/autotest_common.sh@941 -- # uname 00:06:20.404 14:52:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:20.404 14:52:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70000 00:06:20.404 14:52:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:20.404 14:52:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:20.404 14:52:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70000' 00:06:20.404 killing process with pid 70000 00:06:20.404 14:52:43 -- common/autotest_common.sh@955 -- # kill 70000 00:06:20.404 14:52:43 -- common/autotest_common.sh@960 -- # wait 70000 00:06:20.971 00:06:20.971 real 0m2.379s 00:06:20.971 user 0m6.707s 00:06:20.971 sys 0m0.502s 00:06:20.971 14:52:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.971 ************************************ 00:06:20.972 END TEST locking_overlapped_coremask 00:06:20.972 ************************************ 00:06:20.972 14:52:44 -- common/autotest_common.sh@10 -- # set +x 00:06:20.972 14:52:44 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:20.972 14:52:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:20.972 14:52:44 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.972 14:52:44 -- common/autotest_common.sh@10 -- # set +x 00:06:20.972 ************************************ 00:06:20.972 START TEST locking_overlapped_coremask_via_rpc 00:06:20.972 ************************************ 00:06:20.972 14:52:44 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:20.972 14:52:44 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70077 00:06:20.972 14:52:44 -- event/cpu_locks.sh@149 -- # waitforlisten 70077 /var/tmp/spdk.sock 00:06:20.972 14:52:44 -- common/autotest_common.sh@829 -- # '[' -z 70077 ']' 00:06:20.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.972 14:52:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.972 14:52:44 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:20.972 14:52:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.972 14:52:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.972 14:52:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.972 14:52:44 -- common/autotest_common.sh@10 -- # set +x 00:06:20.972 [2024-11-18 14:52:44.554194] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:21.230 [2024-11-18 14:52:44.555142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70077 ] 00:06:21.230 [2024-11-18 14:52:44.693055] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:21.230 [2024-11-18 14:52:44.693098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.231 [2024-11-18 14:52:44.751490] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:21.231 [2024-11-18 14:52:44.752275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.231 [2024-11-18 14:52:44.752153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.231 [2024-11-18 14:52:44.752270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.167 14:52:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.167 14:52:45 -- common/autotest_common.sh@862 -- # return 0 00:06:22.167 14:52:45 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:22.167 14:52:45 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70107 00:06:22.167 14:52:45 -- event/cpu_locks.sh@153 -- # waitforlisten 70107 /var/tmp/spdk2.sock 00:06:22.167 14:52:45 -- common/autotest_common.sh@829 -- # '[' -z 70107 ']' 00:06:22.167 14:52:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.167 14:52:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.167 14:52:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:22.167 14:52:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.167 14:52:45 -- common/autotest_common.sh@10 -- # set +x 00:06:22.167 [2024-11-18 14:52:45.514030] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:22.167 [2024-11-18 14:52:45.514866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70107 ] 00:06:22.167 [2024-11-18 14:52:45.653588] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:22.167 [2024-11-18 14:52:45.653641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.426 [2024-11-18 14:52:45.799986] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:22.426 [2024-11-18 14:52:45.800259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.426 [2024-11-18 14:52:45.807452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.426 [2024-11-18 14:52:45.807452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:22.994 14:52:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.994 14:52:46 -- common/autotest_common.sh@862 -- # return 0 00:06:22.994 14:52:46 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:22.994 14:52:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.994 14:52:46 -- common/autotest_common.sh@10 -- # set +x 00:06:22.994 14:52:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.994 14:52:46 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.994 14:52:46 -- common/autotest_common.sh@650 -- # local es=0 00:06:22.994 14:52:46 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.994 14:52:46 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:22.994 14:52:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.994 14:52:46 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:22.994 14:52:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.994 14:52:46 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.994 14:52:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.994 14:52:46 -- common/autotest_common.sh@10 -- # set +x 00:06:22.994 [2024-11-18 14:52:46.548470] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70077 has claimed it. 00:06:22.994 2024/11/18 14:52:46 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:22.994 request: 00:06:22.994 { 00:06:22.994 "method": "framework_enable_cpumask_locks", 00:06:22.994 "params": {} 00:06:22.994 } 00:06:22.994 Got JSON-RPC error response 00:06:22.994 GoRPCClient: error on JSON-RPC call 00:06:22.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
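The claim failure above is the expected result for this test. As a point of reference, a minimal sketch of issuing the same RPC by hand against the second target's socket, assuming the stock SPDK JSON-RPC client at scripts/rpc.py in the repo root (illustrative only, not part of this run):

  # Ask the second spdk_tgt (started with --disable-cpumask-locks) to take its core locks;
  # it fails here because pid 70077 already holds the lock file for core 2.
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks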
00:06:22.994 14:52:46 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:22.994 14:52:46 -- common/autotest_common.sh@653 -- # es=1 00:06:22.994 14:52:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:22.994 14:52:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:22.994 14:52:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:22.994 14:52:46 -- event/cpu_locks.sh@158 -- # waitforlisten 70077 /var/tmp/spdk.sock 00:06:22.994 14:52:46 -- common/autotest_common.sh@829 -- # '[' -z 70077 ']' 00:06:22.994 14:52:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.994 14:52:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.994 14:52:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.994 14:52:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.994 14:52:46 -- common/autotest_common.sh@10 -- # set +x 00:06:23.253 14:52:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.253 14:52:46 -- common/autotest_common.sh@862 -- # return 0 00:06:23.253 14:52:46 -- event/cpu_locks.sh@159 -- # waitforlisten 70107 /var/tmp/spdk2.sock 00:06:23.253 14:52:46 -- common/autotest_common.sh@829 -- # '[' -z 70107 ']' 00:06:23.253 14:52:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.253 14:52:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.253 14:52:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.253 14:52:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.253 14:52:46 -- common/autotest_common.sh@10 -- # set +x 00:06:23.511 14:52:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.511 14:52:47 -- common/autotest_common.sh@862 -- # return 0 00:06:23.511 14:52:47 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:23.511 14:52:47 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:23.511 14:52:47 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:23.511 14:52:47 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:23.511 ************************************ 00:06:23.511 END TEST locking_overlapped_coremask_via_rpc 00:06:23.511 ************************************ 00:06:23.511 00:06:23.511 real 0m2.587s 00:06:23.511 user 0m1.302s 00:06:23.511 sys 0m0.218s 00:06:23.511 14:52:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.511 14:52:47 -- common/autotest_common.sh@10 -- # set +x 00:06:23.769 14:52:47 -- event/cpu_locks.sh@174 -- # cleanup 00:06:23.769 14:52:47 -- event/cpu_locks.sh@15 -- # [[ -z 70077 ]] 00:06:23.769 14:52:47 -- event/cpu_locks.sh@15 -- # killprocess 70077 00:06:23.769 14:52:47 -- common/autotest_common.sh@936 -- # '[' -z 70077 ']' 00:06:23.769 14:52:47 -- common/autotest_common.sh@940 -- # kill -0 70077 00:06:23.769 14:52:47 -- common/autotest_common.sh@941 -- # uname 00:06:23.769 14:52:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:23.769 14:52:47 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 70077 00:06:23.769 killing process with pid 70077 00:06:23.769 14:52:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:23.769 14:52:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:23.769 14:52:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70077' 00:06:23.769 14:52:47 -- common/autotest_common.sh@955 -- # kill 70077 00:06:23.769 14:52:47 -- common/autotest_common.sh@960 -- # wait 70077 00:06:24.338 14:52:47 -- event/cpu_locks.sh@16 -- # [[ -z 70107 ]] 00:06:24.338 14:52:47 -- event/cpu_locks.sh@16 -- # killprocess 70107 00:06:24.338 14:52:47 -- common/autotest_common.sh@936 -- # '[' -z 70107 ']' 00:06:24.338 14:52:47 -- common/autotest_common.sh@940 -- # kill -0 70107 00:06:24.338 14:52:47 -- common/autotest_common.sh@941 -- # uname 00:06:24.338 14:52:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:24.338 14:52:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70107 00:06:24.338 killing process with pid 70107 00:06:24.338 14:52:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:24.338 14:52:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:24.338 14:52:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70107' 00:06:24.338 14:52:47 -- common/autotest_common.sh@955 -- # kill 70107 00:06:24.338 14:52:47 -- common/autotest_common.sh@960 -- # wait 70107 00:06:24.597 14:52:48 -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.597 14:52:48 -- event/cpu_locks.sh@1 -- # cleanup 00:06:24.597 14:52:48 -- event/cpu_locks.sh@15 -- # [[ -z 70077 ]] 00:06:24.597 14:52:48 -- event/cpu_locks.sh@15 -- # killprocess 70077 00:06:24.597 14:52:48 -- common/autotest_common.sh@936 -- # '[' -z 70077 ']' 00:06:24.597 14:52:48 -- common/autotest_common.sh@940 -- # kill -0 70077 00:06:24.597 Process with pid 70077 is not found 00:06:24.597 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70077) - No such process 00:06:24.597 14:52:48 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70077 is not found' 00:06:24.597 14:52:48 -- event/cpu_locks.sh@16 -- # [[ -z 70107 ]] 00:06:24.597 14:52:48 -- event/cpu_locks.sh@16 -- # killprocess 70107 00:06:24.597 Process with pid 70107 is not found 00:06:24.597 14:52:48 -- common/autotest_common.sh@936 -- # '[' -z 70107 ']' 00:06:24.597 14:52:48 -- common/autotest_common.sh@940 -- # kill -0 70107 00:06:24.597 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70107) - No such process 00:06:24.597 14:52:48 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70107 is not found' 00:06:24.597 14:52:48 -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.597 00:06:24.597 real 0m21.947s 00:06:24.597 user 0m38.388s 00:06:24.597 sys 0m5.877s 00:06:24.597 14:52:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.597 14:52:48 -- common/autotest_common.sh@10 -- # set +x 00:06:24.597 ************************************ 00:06:24.597 END TEST cpu_locks 00:06:24.597 ************************************ 00:06:24.856 00:06:24.856 real 0m49.667s 00:06:24.856 user 1m35.990s 00:06:24.856 sys 0m9.410s 00:06:24.856 14:52:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.856 ************************************ 00:06:24.856 END TEST event 00:06:24.856 14:52:48 -- common/autotest_common.sh@10 -- # set +x 00:06:24.856 ************************************ 00:06:24.856 14:52:48 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:24.856 14:52:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:24.856 14:52:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.856 14:52:48 -- common/autotest_common.sh@10 -- # set +x 00:06:24.856 ************************************ 00:06:24.856 START TEST thread 00:06:24.856 ************************************ 00:06:24.856 14:52:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:24.856 * Looking for test storage... 00:06:24.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:24.856 14:52:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:24.856 14:52:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:24.856 14:52:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:24.856 14:52:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:24.856 14:52:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:24.856 14:52:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:24.856 14:52:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:24.856 14:52:48 -- scripts/common.sh@335 -- # IFS=.-: 00:06:24.856 14:52:48 -- scripts/common.sh@335 -- # read -ra ver1 00:06:24.856 14:52:48 -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.856 14:52:48 -- scripts/common.sh@336 -- # read -ra ver2 00:06:24.856 14:52:48 -- scripts/common.sh@337 -- # local 'op=<' 00:06:24.856 14:52:48 -- scripts/common.sh@339 -- # ver1_l=2 00:06:24.856 14:52:48 -- scripts/common.sh@340 -- # ver2_l=1 00:06:24.856 14:52:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:24.856 14:52:48 -- scripts/common.sh@343 -- # case "$op" in 00:06:24.856 14:52:48 -- scripts/common.sh@344 -- # : 1 00:06:24.856 14:52:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:24.856 14:52:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.856 14:52:48 -- scripts/common.sh@364 -- # decimal 1 00:06:24.856 14:52:48 -- scripts/common.sh@352 -- # local d=1 00:06:24.856 14:52:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.856 14:52:48 -- scripts/common.sh@354 -- # echo 1 00:06:24.856 14:52:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:24.856 14:52:48 -- scripts/common.sh@365 -- # decimal 2 00:06:24.856 14:52:48 -- scripts/common.sh@352 -- # local d=2 00:06:24.856 14:52:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.856 14:52:48 -- scripts/common.sh@354 -- # echo 2 00:06:24.856 14:52:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:24.856 14:52:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:24.856 14:52:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:24.856 14:52:48 -- scripts/common.sh@367 -- # return 0 00:06:24.856 14:52:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.856 14:52:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:24.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.856 --rc genhtml_branch_coverage=1 00:06:24.856 --rc genhtml_function_coverage=1 00:06:24.856 --rc genhtml_legend=1 00:06:24.856 --rc geninfo_all_blocks=1 00:06:24.856 --rc geninfo_unexecuted_blocks=1 00:06:24.856 00:06:24.856 ' 00:06:24.856 14:52:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:24.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.856 --rc genhtml_branch_coverage=1 00:06:24.856 --rc genhtml_function_coverage=1 00:06:24.856 --rc genhtml_legend=1 00:06:24.856 --rc geninfo_all_blocks=1 00:06:24.856 --rc geninfo_unexecuted_blocks=1 00:06:24.856 00:06:24.856 ' 00:06:24.856 14:52:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:24.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.856 --rc genhtml_branch_coverage=1 00:06:24.856 --rc genhtml_function_coverage=1 00:06:24.856 --rc genhtml_legend=1 00:06:24.856 --rc geninfo_all_blocks=1 00:06:24.856 --rc geninfo_unexecuted_blocks=1 00:06:24.856 00:06:24.856 ' 00:06:24.856 14:52:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:24.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.856 --rc genhtml_branch_coverage=1 00:06:24.856 --rc genhtml_function_coverage=1 00:06:24.856 --rc genhtml_legend=1 00:06:24.856 --rc geninfo_all_blocks=1 00:06:24.856 --rc geninfo_unexecuted_blocks=1 00:06:24.856 00:06:24.856 ' 00:06:24.856 14:52:48 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.857 14:52:48 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:24.857 14:52:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.857 14:52:48 -- common/autotest_common.sh@10 -- # set +x 00:06:24.857 ************************************ 00:06:24.857 START TEST thread_poller_perf 00:06:24.857 ************************************ 00:06:25.116 14:52:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:25.116 [2024-11-18 14:52:48.462492] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:25.116 [2024-11-18 14:52:48.462739] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70266 ] 00:06:25.116 [2024-11-18 14:52:48.600670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.116 [2024-11-18 14:52:48.671677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.116 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:26.490 [2024-11-18T14:52:50.080Z] ====================================== 00:06:26.490 [2024-11-18T14:52:50.080Z] busy:2206269982 (cyc) 00:06:26.490 [2024-11-18T14:52:50.080Z] total_run_count: 387000 00:06:26.490 [2024-11-18T14:52:50.080Z] tsc_hz: 2200000000 (cyc) 00:06:26.490 [2024-11-18T14:52:50.080Z] ====================================== 00:06:26.490 [2024-11-18T14:52:50.080Z] poller_cost: 5700 (cyc), 2590 (nsec) 00:06:26.490 00:06:26.490 ************************************ 00:06:26.490 END TEST thread_poller_perf 00:06:26.490 ************************************ 00:06:26.490 real 0m1.305s 00:06:26.490 user 0m1.131s 00:06:26.490 sys 0m0.065s 00:06:26.490 14:52:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.490 14:52:49 -- common/autotest_common.sh@10 -- # set +x 00:06:26.490 14:52:49 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.490 14:52:49 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:26.490 14:52:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.490 14:52:49 -- common/autotest_common.sh@10 -- # set +x 00:06:26.490 ************************************ 00:06:26.490 START TEST thread_poller_perf 00:06:26.490 ************************************ 00:06:26.490 14:52:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.490 [2024-11-18 14:52:49.813511] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.490 [2024-11-18 14:52:49.813623] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70296 ] 00:06:26.490 [2024-11-18 14:52:49.951237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.490 [2024-11-18 14:52:50.023708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.490 Running 1000 pollers for 1 seconds with 0 microseconds period. 
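A quick sanity check of the poller_cost figures in the 1-microsecond-period summary above, redone with plain shell integer arithmetic from the busy cycles, run count, and tsc_hz it reports (a sketch, not part of the test output):

  echo $(( 2206269982 / 387000 ))                            # cycles per poll -> 5700
  echo $(( 2206269982 / 387000 * 1000000000 / 2200000000 ))  # nanoseconds     -> 2590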
00:06:27.865 [2024-11-18T14:52:51.455Z] ====================================== 00:06:27.865 [2024-11-18T14:52:51.455Z] busy:2203530776 (cyc) 00:06:27.865 [2024-11-18T14:52:51.455Z] total_run_count: 5341000 00:06:27.865 [2024-11-18T14:52:51.455Z] tsc_hz: 2200000000 (cyc) 00:06:27.865 [2024-11-18T14:52:51.455Z] ====================================== 00:06:27.865 [2024-11-18T14:52:51.455Z] poller_cost: 412 (cyc), 187 (nsec) 00:06:27.865 00:06:27.865 real 0m1.303s 00:06:27.865 user 0m1.132s 00:06:27.865 sys 0m0.064s 00:06:27.865 14:52:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.865 ************************************ 00:06:27.865 END TEST thread_poller_perf 00:06:27.865 14:52:51 -- common/autotest_common.sh@10 -- # set +x 00:06:27.865 ************************************ 00:06:27.865 14:52:51 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:27.865 00:06:27.865 real 0m2.885s 00:06:27.865 user 0m2.411s 00:06:27.865 sys 0m0.259s 00:06:27.865 14:52:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.865 14:52:51 -- common/autotest_common.sh@10 -- # set +x 00:06:27.865 ************************************ 00:06:27.865 END TEST thread 00:06:27.865 ************************************ 00:06:27.865 14:52:51 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:27.865 14:52:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.865 14:52:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.865 14:52:51 -- common/autotest_common.sh@10 -- # set +x 00:06:27.865 ************************************ 00:06:27.865 START TEST accel 00:06:27.865 ************************************ 00:06:27.865 14:52:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:27.865 * Looking for test storage... 00:06:27.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:27.865 14:52:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:27.865 14:52:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:27.865 14:52:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:27.865 14:52:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:27.865 14:52:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:27.865 14:52:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:27.865 14:52:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:27.865 14:52:51 -- scripts/common.sh@335 -- # IFS=.-: 00:06:27.865 14:52:51 -- scripts/common.sh@335 -- # read -ra ver1 00:06:27.865 14:52:51 -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.865 14:52:51 -- scripts/common.sh@336 -- # read -ra ver2 00:06:27.865 14:52:51 -- scripts/common.sh@337 -- # local 'op=<' 00:06:27.865 14:52:51 -- scripts/common.sh@339 -- # ver1_l=2 00:06:27.865 14:52:51 -- scripts/common.sh@340 -- # ver2_l=1 00:06:27.865 14:52:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:27.865 14:52:51 -- scripts/common.sh@343 -- # case "$op" in 00:06:27.865 14:52:51 -- scripts/common.sh@344 -- # : 1 00:06:27.865 14:52:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:27.865 14:52:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.865 14:52:51 -- scripts/common.sh@364 -- # decimal 1 00:06:27.865 14:52:51 -- scripts/common.sh@352 -- # local d=1 00:06:27.865 14:52:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.865 14:52:51 -- scripts/common.sh@354 -- # echo 1 00:06:27.865 14:52:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:27.865 14:52:51 -- scripts/common.sh@365 -- # decimal 2 00:06:27.865 14:52:51 -- scripts/common.sh@352 -- # local d=2 00:06:27.865 14:52:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.865 14:52:51 -- scripts/common.sh@354 -- # echo 2 00:06:27.865 14:52:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:27.865 14:52:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:27.865 14:52:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:27.865 14:52:51 -- scripts/common.sh@367 -- # return 0 00:06:27.865 14:52:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.865 14:52:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:27.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.865 --rc genhtml_branch_coverage=1 00:06:27.865 --rc genhtml_function_coverage=1 00:06:27.865 --rc genhtml_legend=1 00:06:27.865 --rc geninfo_all_blocks=1 00:06:27.865 --rc geninfo_unexecuted_blocks=1 00:06:27.865 00:06:27.865 ' 00:06:27.865 14:52:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:27.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.865 --rc genhtml_branch_coverage=1 00:06:27.865 --rc genhtml_function_coverage=1 00:06:27.865 --rc genhtml_legend=1 00:06:27.865 --rc geninfo_all_blocks=1 00:06:27.865 --rc geninfo_unexecuted_blocks=1 00:06:27.865 00:06:27.865 ' 00:06:27.865 14:52:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:27.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.865 --rc genhtml_branch_coverage=1 00:06:27.865 --rc genhtml_function_coverage=1 00:06:27.865 --rc genhtml_legend=1 00:06:27.865 --rc geninfo_all_blocks=1 00:06:27.865 --rc geninfo_unexecuted_blocks=1 00:06:27.865 00:06:27.865 ' 00:06:27.865 14:52:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:27.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.865 --rc genhtml_branch_coverage=1 00:06:27.865 --rc genhtml_function_coverage=1 00:06:27.865 --rc genhtml_legend=1 00:06:27.865 --rc geninfo_all_blocks=1 00:06:27.865 --rc geninfo_unexecuted_blocks=1 00:06:27.865 00:06:27.865 ' 00:06:27.865 14:52:51 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:27.865 14:52:51 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:27.865 14:52:51 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:27.865 14:52:51 -- accel/accel.sh@59 -- # spdk_tgt_pid=70385 00:06:27.865 14:52:51 -- accel/accel.sh@60 -- # waitforlisten 70385 00:06:27.865 14:52:51 -- common/autotest_common.sh@829 -- # '[' -z 70385 ']' 00:06:27.865 14:52:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.865 14:52:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.865 14:52:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:27.865 14:52:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.865 14:52:51 -- common/autotest_common.sh@10 -- # set +x 00:06:27.865 14:52:51 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:27.865 14:52:51 -- accel/accel.sh@58 -- # build_accel_config 00:06:27.865 14:52:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.865 14:52:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.865 14:52:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.865 14:52:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.865 14:52:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.865 14:52:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.865 14:52:51 -- accel/accel.sh@42 -- # jq -r . 00:06:28.123 [2024-11-18 14:52:51.453221] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:28.123 [2024-11-18 14:52:51.453344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70385 ] 00:06:28.123 [2024-11-18 14:52:51.595595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.123 [2024-11-18 14:52:51.669208] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:28.123 [2024-11-18 14:52:51.669433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.104 14:52:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.104 14:52:52 -- common/autotest_common.sh@862 -- # return 0 00:06:29.104 14:52:52 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:29.104 14:52:52 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:29.104 14:52:52 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:29.104 14:52:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.104 14:52:52 -- common/autotest_common.sh@10 -- # set +x 00:06:29.104 14:52:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.104 14:52:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.104 14:52:52 -- accel/accel.sh@64 -- # IFS== 00:06:29.104 14:52:52 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.104 14:52:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.105 14:52:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # IFS== 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.105 14:52:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.105 14:52:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # IFS== 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.105 14:52:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.105 14:52:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # IFS== 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.105 14:52:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.105 14:52:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # IFS== 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.105 14:52:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.105 14:52:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # IFS== 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.105 14:52:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.105 14:52:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # IFS== 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.105 14:52:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.105 14:52:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # IFS== 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.105 14:52:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.105 14:52:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # IFS== 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.105 14:52:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.105 14:52:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # IFS== 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.105 14:52:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.105 14:52:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # IFS== 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.105 14:52:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.105 14:52:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # IFS== 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.105 
14:52:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.105 14:52:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # IFS== 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.105 14:52:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.105 14:52:52 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # IFS== 00:06:29.105 14:52:52 -- accel/accel.sh@64 -- # read -r opc module 00:06:29.105 14:52:52 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:29.105 14:52:52 -- accel/accel.sh@67 -- # killprocess 70385 00:06:29.105 14:52:52 -- common/autotest_common.sh@936 -- # '[' -z 70385 ']' 00:06:29.105 14:52:52 -- common/autotest_common.sh@940 -- # kill -0 70385 00:06:29.105 14:52:52 -- common/autotest_common.sh@941 -- # uname 00:06:29.105 14:52:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:29.105 14:52:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70385 00:06:29.105 14:52:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:29.105 killing process with pid 70385 00:06:29.105 14:52:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:29.105 14:52:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70385' 00:06:29.105 14:52:52 -- common/autotest_common.sh@955 -- # kill 70385 00:06:29.105 14:52:52 -- common/autotest_common.sh@960 -- # wait 70385 00:06:29.364 14:52:52 -- accel/accel.sh@68 -- # trap - ERR 00:06:29.364 14:52:52 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:29.364 14:52:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:29.364 14:52:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.364 14:52:52 -- common/autotest_common.sh@10 -- # set +x 00:06:29.364 14:52:52 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:29.364 14:52:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:29.364 14:52:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.364 14:52:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.364 14:52:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.364 14:52:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.364 14:52:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.364 14:52:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.364 14:52:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.364 14:52:52 -- accel/accel.sh@42 -- # jq -r . 
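The opcode-to-module table walked through above is the output of a single RPC. A hedged sketch of reproducing it by hand against a running spdk_tgt on the default socket, using the same jq filter the test traces:

  ./scripts/rpc.py accel_get_opc_assignments | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'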
00:06:29.364 14:52:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.364 14:52:52 -- common/autotest_common.sh@10 -- # set +x 00:06:29.364 14:52:52 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:29.364 14:52:52 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:29.364 14:52:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.364 14:52:52 -- common/autotest_common.sh@10 -- # set +x 00:06:29.364 ************************************ 00:06:29.364 START TEST accel_missing_filename 00:06:29.364 ************************************ 00:06:29.364 14:52:52 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:29.364 14:52:52 -- common/autotest_common.sh@650 -- # local es=0 00:06:29.364 14:52:52 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:29.364 14:52:52 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:29.364 14:52:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.364 14:52:52 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:29.364 14:52:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.364 14:52:52 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:29.364 14:52:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:29.364 14:52:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.364 14:52:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.364 14:52:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.364 14:52:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.364 14:52:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.364 14:52:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.364 14:52:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.364 14:52:52 -- accel/accel.sh@42 -- # jq -r . 00:06:29.364 [2024-11-18 14:52:52.895048] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:29.364 [2024-11-18 14:52:52.895142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70450 ] 00:06:29.623 [2024-11-18 14:52:53.033126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.623 [2024-11-18 14:52:53.097746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.623 [2024-11-18 14:52:53.162061] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.882 [2024-11-18 14:52:53.246243] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:29.882 A filename is required. 
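"A filename is required." is the expected failure, since the compress workload needs an input file via -l. For contrast, a sketch of the same run with the file supplied, reusing the bib test file that the following compress_verify test feeds to accel_perf (illustrative only, not part of this run):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib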
00:06:29.882 14:52:53 -- common/autotest_common.sh@653 -- # es=234 00:06:29.882 14:52:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:29.882 14:52:53 -- common/autotest_common.sh@662 -- # es=106 00:06:29.882 14:52:53 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:29.882 14:52:53 -- common/autotest_common.sh@670 -- # es=1 00:06:29.882 14:52:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:29.882 00:06:29.882 real 0m0.456s 00:06:29.882 user 0m0.287s 00:06:29.882 sys 0m0.113s 00:06:29.882 14:52:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.882 ************************************ 00:06:29.882 END TEST accel_missing_filename 00:06:29.882 ************************************ 00:06:29.882 14:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:29.882 14:52:53 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:29.882 14:52:53 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:29.882 14:52:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.882 14:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:29.882 ************************************ 00:06:29.882 START TEST accel_compress_verify 00:06:29.882 ************************************ 00:06:29.882 14:52:53 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:29.882 14:52:53 -- common/autotest_common.sh@650 -- # local es=0 00:06:29.882 14:52:53 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:29.882 14:52:53 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:29.882 14:52:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.882 14:52:53 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:29.882 14:52:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.882 14:52:53 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:29.882 14:52:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:29.882 14:52:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.882 14:52:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.882 14:52:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.882 14:52:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.882 14:52:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.882 14:52:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.882 14:52:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.882 14:52:53 -- accel/accel.sh@42 -- # jq -r . 00:06:29.882 [2024-11-18 14:52:53.403613] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:29.882 [2024-11-18 14:52:53.403719] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70475 ] 00:06:30.142 [2024-11-18 14:52:53.540789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.142 [2024-11-18 14:52:53.600661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.142 [2024-11-18 14:52:53.654947] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:30.142 [2024-11-18 14:52:53.726309] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:30.401 00:06:30.401 Compression does not support the verify option, aborting. 00:06:30.401 14:52:53 -- common/autotest_common.sh@653 -- # es=161 00:06:30.401 14:52:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:30.401 14:52:53 -- common/autotest_common.sh@662 -- # es=33 00:06:30.401 14:52:53 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:30.401 14:52:53 -- common/autotest_common.sh@670 -- # es=1 00:06:30.401 14:52:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:30.401 00:06:30.401 real 0m0.423s 00:06:30.401 user 0m0.251s 00:06:30.401 sys 0m0.121s 00:06:30.401 14:52:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.401 ************************************ 00:06:30.401 END TEST accel_compress_verify 00:06:30.401 ************************************ 00:06:30.401 14:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:30.401 14:52:53 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:30.401 14:52:53 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:30.401 14:52:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.401 14:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:30.401 ************************************ 00:06:30.401 START TEST accel_wrong_workload 00:06:30.401 ************************************ 00:06:30.401 14:52:53 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:30.401 14:52:53 -- common/autotest_common.sh@650 -- # local es=0 00:06:30.401 14:52:53 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:30.401 14:52:53 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:30.401 14:52:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.401 14:52:53 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:30.401 14:52:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.401 14:52:53 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:30.401 14:52:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:30.401 14:52:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.401 14:52:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.401 14:52:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.401 14:52:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.401 14:52:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.401 14:52:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.401 14:52:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.401 14:52:53 -- accel/accel.sh@42 -- # jq -r . 
00:06:30.401 Unsupported workload type: foobar 00:06:30.401 [2024-11-18 14:52:53.868848] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:30.401 accel_perf options: 00:06:30.401 [-h help message] 00:06:30.401 [-q queue depth per core] 00:06:30.401 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:30.401 [-T number of threads per core 00:06:30.401 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:30.401 [-t time in seconds] 00:06:30.401 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:30.401 [ dif_verify, , dif_generate, dif_generate_copy 00:06:30.401 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:30.401 [-l for compress/decompress workloads, name of uncompressed input file 00:06:30.402 [-S for crc32c workload, use this seed value (default 0) 00:06:30.402 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:30.402 [-f for fill workload, use this BYTE value (default 255) 00:06:30.402 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:30.402 [-y verify result if this switch is on] 00:06:30.402 [-a tasks to allocate per core (default: same value as -q)] 00:06:30.402 Can be used to spread operations across a wider range of memory. 00:06:30.402 14:52:53 -- common/autotest_common.sh@653 -- # es=1 00:06:30.402 14:52:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:30.402 14:52:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:30.402 14:52:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:30.402 00:06:30.402 real 0m0.028s 00:06:30.402 user 0m0.017s 00:06:30.402 sys 0m0.011s 00:06:30.402 14:52:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.402 ************************************ 00:06:30.402 END TEST accel_wrong_workload 00:06:30.402 ************************************ 00:06:30.402 14:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:30.402 14:52:53 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:30.402 14:52:53 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:30.402 14:52:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.402 14:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:30.402 ************************************ 00:06:30.402 START TEST accel_negative_buffers 00:06:30.402 ************************************ 00:06:30.402 14:52:53 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:30.402 14:52:53 -- common/autotest_common.sh@650 -- # local es=0 00:06:30.402 14:52:53 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:30.402 14:52:53 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:30.402 14:52:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.402 14:52:53 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:30.402 14:52:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.402 14:52:53 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:30.402 14:52:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:30.402 14:52:53 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:30.402 14:52:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.402 14:52:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.402 14:52:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.402 14:52:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.402 14:52:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.402 14:52:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.402 14:52:53 -- accel/accel.sh@42 -- # jq -r . 00:06:30.402 -x option must be non-negative. 00:06:30.402 [2024-11-18 14:52:53.941145] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:30.402 accel_perf options: 00:06:30.402 [-h help message] 00:06:30.402 [-q queue depth per core] 00:06:30.402 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:30.402 [-T number of threads per core 00:06:30.402 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:30.402 [-t time in seconds] 00:06:30.402 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:30.402 [ dif_verify, , dif_generate, dif_generate_copy 00:06:30.402 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:30.402 [-l for compress/decompress workloads, name of uncompressed input file 00:06:30.402 [-S for crc32c workload, use this seed value (default 0) 00:06:30.402 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:30.402 [-f for fill workload, use this BYTE value (default 255) 00:06:30.402 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:30.402 [-y verify result if this switch is on] 00:06:30.402 [-a tasks to allocate per core (default: same value as -q)] 00:06:30.402 Can be used to spread operations across a wider range of memory. 
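The "-x option must be non-negative." rejection above is the point of the test. For contrast, a sketch of a valid xor invocation, taking the minimum of 2 source buffers from the help text just printed (not part of the original run):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2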
00:06:30.402 14:52:53 -- common/autotest_common.sh@653 -- # es=1 00:06:30.402 14:52:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:30.402 14:52:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:30.402 14:52:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:30.402 00:06:30.402 real 0m0.026s 00:06:30.402 user 0m0.017s 00:06:30.402 sys 0m0.009s 00:06:30.402 ************************************ 00:06:30.402 14:52:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.402 14:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:30.402 END TEST accel_negative_buffers 00:06:30.402 ************************************ 00:06:30.402 14:52:53 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:30.402 14:52:53 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:30.402 14:52:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.402 14:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:30.662 ************************************ 00:06:30.662 START TEST accel_crc32c 00:06:30.662 ************************************ 00:06:30.662 14:52:53 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:30.662 14:52:53 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.662 14:52:53 -- accel/accel.sh@17 -- # local accel_module 00:06:30.662 14:52:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:30.662 14:52:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.662 14:52:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:30.662 14:52:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.662 14:52:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.662 14:52:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.662 14:52:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.662 14:52:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.662 14:52:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.662 14:52:53 -- accel/accel.sh@42 -- # jq -r . 00:06:30.662 [2024-11-18 14:52:54.017372] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:30.662 [2024-11-18 14:52:54.017476] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70533 ] 00:06:30.662 [2024-11-18 14:52:54.155991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.662 [2024-11-18 14:52:54.215799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.039 14:52:55 -- accel/accel.sh@18 -- # out=' 00:06:32.039 SPDK Configuration: 00:06:32.039 Core mask: 0x1 00:06:32.039 00:06:32.039 Accel Perf Configuration: 00:06:32.039 Workload Type: crc32c 00:06:32.039 CRC-32C seed: 32 00:06:32.039 Transfer size: 4096 bytes 00:06:32.039 Vector count 1 00:06:32.039 Module: software 00:06:32.039 Queue depth: 32 00:06:32.039 Allocate depth: 32 00:06:32.039 # threads/core: 1 00:06:32.039 Run time: 1 seconds 00:06:32.039 Verify: Yes 00:06:32.039 00:06:32.039 Running for 1 seconds... 
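For the crc32c run configured above, the MiB/s column in the summary that follows is simply transfers per second times the 4096-byte transfer size. A shell check using the figures from that summary (a sketch):

  echo $(( 569696 * 4096 / 1024 / 1024 ))   # -> 2225 MiB/s, matching the reported bandwidth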
00:06:32.039 00:06:32.039 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:32.039 ------------------------------------------------------------------------------------ 00:06:32.039 0,0 569696/s 2225 MiB/s 0 0 00:06:32.039 ==================================================================================== 00:06:32.039 Total 569696/s 2225 MiB/s 0 0' 00:06:32.039 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.039 14:52:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:32.039 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.039 14:52:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:32.039 14:52:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.039 14:52:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.039 14:52:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.039 14:52:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.039 14:52:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.039 14:52:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.039 14:52:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.039 14:52:55 -- accel/accel.sh@42 -- # jq -r . 00:06:32.039 [2024-11-18 14:52:55.423011] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:32.039 [2024-11-18 14:52:55.423106] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70553 ] 00:06:32.039 [2024-11-18 14:52:55.560593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.039 [2024-11-18 14:52:55.616841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.298 14:52:55 -- accel/accel.sh@21 -- # val= 00:06:32.299 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.299 14:52:55 -- accel/accel.sh@21 -- # val= 00:06:32.299 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.299 14:52:55 -- accel/accel.sh@21 -- # val=0x1 00:06:32.299 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.299 14:52:55 -- accel/accel.sh@21 -- # val= 00:06:32.299 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.299 14:52:55 -- accel/accel.sh@21 -- # val= 00:06:32.299 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.299 14:52:55 -- accel/accel.sh@21 -- # val=crc32c 00:06:32.299 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.299 14:52:55 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.299 14:52:55 -- accel/accel.sh@21 -- # val=32 00:06:32.299 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.299 14:52:55 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:32.299 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.299 14:52:55 -- accel/accel.sh@21 -- # val= 00:06:32.299 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.299 14:52:55 -- accel/accel.sh@21 -- # val=software 00:06:32.299 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.299 14:52:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.299 14:52:55 -- accel/accel.sh@21 -- # val=32 00:06:32.299 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.299 14:52:55 -- accel/accel.sh@21 -- # val=32 00:06:32.299 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.299 14:52:55 -- accel/accel.sh@21 -- # val=1 00:06:32.299 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.299 14:52:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:32.299 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.299 14:52:55 -- accel/accel.sh@21 -- # val=Yes 00:06:32.299 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.299 14:52:55 -- accel/accel.sh@21 -- # val= 00:06:32.299 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:32.299 14:52:55 -- accel/accel.sh@21 -- # val= 00:06:32.299 14:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:32.299 14:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:33.236 14:52:56 -- accel/accel.sh@21 -- # val= 00:06:33.236 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.236 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:06:33.236 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:06:33.236 14:52:56 -- accel/accel.sh@21 -- # val= 00:06:33.236 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.236 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:06:33.236 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:06:33.236 14:52:56 -- accel/accel.sh@21 -- # val= 00:06:33.236 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.236 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:06:33.236 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:06:33.236 14:52:56 -- accel/accel.sh@21 -- # val= 00:06:33.236 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.236 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:06:33.236 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:06:33.236 14:52:56 -- accel/accel.sh@21 -- # val= 00:06:33.236 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.236 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:06:33.236 14:52:56 -- 
accel/accel.sh@20 -- # read -r var val 00:06:33.236 14:52:56 -- accel/accel.sh@21 -- # val= 00:06:33.236 14:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.236 14:52:56 -- accel/accel.sh@20 -- # IFS=: 00:06:33.236 14:52:56 -- accel/accel.sh@20 -- # read -r var val 00:06:33.236 14:52:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:33.236 14:52:56 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:33.236 14:52:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.236 00:06:33.236 real 0m2.804s 00:06:33.236 user 0m2.392s 00:06:33.236 sys 0m0.214s 00:06:33.236 14:52:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:33.236 ************************************ 00:06:33.236 END TEST accel_crc32c 00:06:33.236 ************************************ 00:06:33.236 14:52:56 -- common/autotest_common.sh@10 -- # set +x 00:06:33.495 14:52:56 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:33.495 14:52:56 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:33.495 14:52:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.495 14:52:56 -- common/autotest_common.sh@10 -- # set +x 00:06:33.495 ************************************ 00:06:33.495 START TEST accel_crc32c_C2 00:06:33.495 ************************************ 00:06:33.495 14:52:56 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:33.495 14:52:56 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.495 14:52:56 -- accel/accel.sh@17 -- # local accel_module 00:06:33.495 14:52:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:33.495 14:52:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.495 14:52:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:33.495 14:52:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.495 14:52:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.495 14:52:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.495 14:52:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.495 14:52:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.495 14:52:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.495 14:52:56 -- accel/accel.sh@42 -- # jq -r . 00:06:33.495 [2024-11-18 14:52:56.874417] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:33.495 [2024-11-18 14:52:56.874514] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70587 ] 00:06:33.495 [2024-11-18 14:52:57.011982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.495 [2024-11-18 14:52:57.075471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.872 14:52:58 -- accel/accel.sh@18 -- # out=' 00:06:34.872 SPDK Configuration: 00:06:34.872 Core mask: 0x1 00:06:34.872 00:06:34.872 Accel Perf Configuration: 00:06:34.872 Workload Type: crc32c 00:06:34.872 CRC-32C seed: 0 00:06:34.872 Transfer size: 4096 bytes 00:06:34.872 Vector count 2 00:06:34.872 Module: software 00:06:34.872 Queue depth: 32 00:06:34.872 Allocate depth: 32 00:06:34.872 # threads/core: 1 00:06:34.872 Run time: 1 seconds 00:06:34.872 Verify: Yes 00:06:34.872 00:06:34.872 Running for 1 seconds... 
00:06:34.872 00:06:34.872 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:34.872 ------------------------------------------------------------------------------------ 00:06:34.872 0,0 426816/s 3334 MiB/s 0 0 00:06:34.872 ==================================================================================== 00:06:34.872 Total 426816/s 1667 MiB/s 0 0' 00:06:34.872 14:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:34.872 14:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:34.872 14:52:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:34.872 14:52:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:34.872 14:52:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.872 14:52:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.872 14:52:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.872 14:52:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.872 14:52:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.872 14:52:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.872 14:52:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.872 14:52:58 -- accel/accel.sh@42 -- # jq -r . 00:06:34.872 [2024-11-18 14:52:58.290006] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:34.872 [2024-11-18 14:52:58.290576] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70601 ] 00:06:34.872 [2024-11-18 14:52:58.430839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.131 [2024-11-18 14:52:58.496997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.131 14:52:58 -- accel/accel.sh@21 -- # val= 00:06:35.131 14:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:35.131 14:52:58 -- accel/accel.sh@21 -- # val= 00:06:35.131 14:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:35.131 14:52:58 -- accel/accel.sh@21 -- # val=0x1 00:06:35.131 14:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:35.131 14:52:58 -- accel/accel.sh@21 -- # val= 00:06:35.131 14:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:35.131 14:52:58 -- accel/accel.sh@21 -- # val= 00:06:35.131 14:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:35.131 14:52:58 -- accel/accel.sh@21 -- # val=crc32c 00:06:35.131 14:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.131 14:52:58 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:35.131 14:52:58 -- accel/accel.sh@21 -- # val=0 00:06:35.131 14:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:35.131 14:52:58 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:35.131 14:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:35.131 14:52:58 -- accel/accel.sh@21 -- # val= 00:06:35.131 14:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:35.131 14:52:58 -- accel/accel.sh@21 -- # val=software 00:06:35.131 14:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.131 14:52:58 -- accel/accel.sh@23 -- # accel_module=software 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:35.131 14:52:58 -- accel/accel.sh@21 -- # val=32 00:06:35.131 14:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:35.131 14:52:58 -- accel/accel.sh@21 -- # val=32 00:06:35.131 14:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:35.131 14:52:58 -- accel/accel.sh@21 -- # val=1 00:06:35.131 14:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:35.131 14:52:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:35.131 14:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:35.131 14:52:58 -- accel/accel.sh@21 -- # val=Yes 00:06:35.131 14:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:35.131 14:52:58 -- accel/accel.sh@21 -- # val= 00:06:35.131 14:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:35.131 14:52:58 -- accel/accel.sh@21 -- # val= 00:06:35.131 14:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:35.131 14:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:36.506 14:52:59 -- accel/accel.sh@21 -- # val= 00:06:36.506 14:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.506 14:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:36.506 14:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:36.506 14:52:59 -- accel/accel.sh@21 -- # val= 00:06:36.506 14:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.506 14:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:36.506 14:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:36.506 14:52:59 -- accel/accel.sh@21 -- # val= 00:06:36.506 14:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.506 14:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:36.506 14:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:36.506 14:52:59 -- accel/accel.sh@21 -- # val= 00:06:36.506 14:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.506 14:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:36.506 14:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:36.506 14:52:59 -- accel/accel.sh@21 -- # val= 00:06:36.506 14:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.506 14:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:36.506 14:52:59 -- 
accel/accel.sh@20 -- # read -r var val 00:06:36.506 14:52:59 -- accel/accel.sh@21 -- # val= 00:06:36.506 14:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.506 14:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:36.506 14:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:36.506 14:52:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:36.506 14:52:59 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:36.506 14:52:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.506 00:06:36.506 real 0m2.843s 00:06:36.506 user 0m2.404s 00:06:36.506 sys 0m0.240s 00:06:36.506 ************************************ 00:06:36.506 END TEST accel_crc32c_C2 00:06:36.506 ************************************ 00:06:36.506 14:52:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:36.506 14:52:59 -- common/autotest_common.sh@10 -- # set +x 00:06:36.506 14:52:59 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:36.506 14:52:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:36.506 14:52:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.506 14:52:59 -- common/autotest_common.sh@10 -- # set +x 00:06:36.506 ************************************ 00:06:36.506 START TEST accel_copy 00:06:36.506 ************************************ 00:06:36.506 14:52:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:36.506 14:52:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.506 14:52:59 -- accel/accel.sh@17 -- # local accel_module 00:06:36.506 14:52:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:36.506 14:52:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:36.506 14:52:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.506 14:52:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.506 14:52:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.506 14:52:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.506 14:52:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.506 14:52:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.506 14:52:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.506 14:52:59 -- accel/accel.sh@42 -- # jq -r . 00:06:36.506 [2024-11-18 14:52:59.768277] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:36.506 [2024-11-18 14:52:59.768386] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70641 ] 00:06:36.506 [2024-11-18 14:52:59.904817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.506 [2024-11-18 14:52:59.959219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.882 14:53:01 -- accel/accel.sh@18 -- # out=' 00:06:37.882 SPDK Configuration: 00:06:37.882 Core mask: 0x1 00:06:37.882 00:06:37.882 Accel Perf Configuration: 00:06:37.882 Workload Type: copy 00:06:37.882 Transfer size: 4096 bytes 00:06:37.882 Vector count 1 00:06:37.882 Module: software 00:06:37.882 Queue depth: 32 00:06:37.882 Allocate depth: 32 00:06:37.882 # threads/core: 1 00:06:37.882 Run time: 1 seconds 00:06:37.882 Verify: Yes 00:06:37.882 00:06:37.882 Running for 1 seconds... 
00:06:37.882 00:06:37.882 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:37.882 ------------------------------------------------------------------------------------ 00:06:37.882 0,0 393440/s 1536 MiB/s 0 0 00:06:37.882 ==================================================================================== 00:06:37.882 Total 393440/s 1536 MiB/s 0 0' 00:06:37.882 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.882 14:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.882 14:53:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:37.882 14:53:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:37.882 14:53:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.882 14:53:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.882 14:53:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.882 14:53:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.882 14:53:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.882 14:53:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.882 14:53:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.882 14:53:01 -- accel/accel.sh@42 -- # jq -r . 00:06:37.883 [2024-11-18 14:53:01.162930] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:37.883 [2024-11-18 14:53:01.163027] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70655 ] 00:06:37.883 [2024-11-18 14:53:01.299077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.883 [2024-11-18 14:53:01.352131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.883 14:53:01 -- accel/accel.sh@21 -- # val= 00:06:37.883 14:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.883 14:53:01 -- accel/accel.sh@21 -- # val= 00:06:37.883 14:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.883 14:53:01 -- accel/accel.sh@21 -- # val=0x1 00:06:37.883 14:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.883 14:53:01 -- accel/accel.sh@21 -- # val= 00:06:37.883 14:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.883 14:53:01 -- accel/accel.sh@21 -- # val= 00:06:37.883 14:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.883 14:53:01 -- accel/accel.sh@21 -- # val=copy 00:06:37.883 14:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.883 14:53:01 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.883 14:53:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:37.883 14:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.883 14:53:01 -- 
accel/accel.sh@21 -- # val= 00:06:37.883 14:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.883 14:53:01 -- accel/accel.sh@21 -- # val=software 00:06:37.883 14:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.883 14:53:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.883 14:53:01 -- accel/accel.sh@21 -- # val=32 00:06:37.883 14:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.883 14:53:01 -- accel/accel.sh@21 -- # val=32 00:06:37.883 14:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.883 14:53:01 -- accel/accel.sh@21 -- # val=1 00:06:37.883 14:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.883 14:53:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:37.883 14:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.883 14:53:01 -- accel/accel.sh@21 -- # val=Yes 00:06:37.883 14:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.883 14:53:01 -- accel/accel.sh@21 -- # val= 00:06:37.883 14:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:37.883 14:53:01 -- accel/accel.sh@21 -- # val= 00:06:37.883 14:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:37.883 14:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:39.260 14:53:02 -- accel/accel.sh@21 -- # val= 00:06:39.260 14:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.260 14:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:39.260 14:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:39.260 14:53:02 -- accel/accel.sh@21 -- # val= 00:06:39.260 14:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.260 14:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:39.260 14:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:39.260 14:53:02 -- accel/accel.sh@21 -- # val= 00:06:39.260 14:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.260 14:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:39.260 14:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:39.260 14:53:02 -- accel/accel.sh@21 -- # val= 00:06:39.260 14:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.260 14:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:39.260 14:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:39.260 14:53:02 -- accel/accel.sh@21 -- # val= 00:06:39.260 14:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.260 14:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:39.260 14:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:39.260 14:53:02 -- accel/accel.sh@21 -- # val= 00:06:39.260 14:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.260 14:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:39.260 14:53:02 -- 
accel/accel.sh@20 -- # read -r var val 00:06:39.260 14:53:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:39.260 14:53:02 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:39.260 14:53:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.260 00:06:39.260 real 0m2.790s 00:06:39.260 user 0m2.368s 00:06:39.260 sys 0m0.223s 00:06:39.260 ************************************ 00:06:39.260 END TEST accel_copy 00:06:39.260 ************************************ 00:06:39.260 14:53:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.260 14:53:02 -- common/autotest_common.sh@10 -- # set +x 00:06:39.260 14:53:02 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:39.260 14:53:02 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:39.260 14:53:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.260 14:53:02 -- common/autotest_common.sh@10 -- # set +x 00:06:39.260 ************************************ 00:06:39.260 START TEST accel_fill 00:06:39.260 ************************************ 00:06:39.260 14:53:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:39.260 14:53:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:39.260 14:53:02 -- accel/accel.sh@17 -- # local accel_module 00:06:39.260 14:53:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:39.260 14:53:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.260 14:53:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:39.260 14:53:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.260 14:53:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.260 14:53:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.260 14:53:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.260 14:53:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.260 14:53:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.260 14:53:02 -- accel/accel.sh@42 -- # jq -r . 00:06:39.260 [2024-11-18 14:53:02.603252] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.260 [2024-11-18 14:53:02.603350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70690 ] 00:06:39.260 [2024-11-18 14:53:02.732121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.260 [2024-11-18 14:53:02.785902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.639 14:53:03 -- accel/accel.sh@18 -- # out=' 00:06:40.639 SPDK Configuration: 00:06:40.639 Core mask: 0x1 00:06:40.639 00:06:40.639 Accel Perf Configuration: 00:06:40.639 Workload Type: fill 00:06:40.639 Fill pattern: 0x80 00:06:40.639 Transfer size: 4096 bytes 00:06:40.639 Vector count 1 00:06:40.639 Module: software 00:06:40.639 Queue depth: 64 00:06:40.639 Allocate depth: 64 00:06:40.639 # threads/core: 1 00:06:40.639 Run time: 1 seconds 00:06:40.639 Verify: Yes 00:06:40.639 00:06:40.639 Running for 1 seconds... 
00:06:40.639 00:06:40.639 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:40.639 ------------------------------------------------------------------------------------ 00:06:40.639 0,0 570496/s 2228 MiB/s 0 0 00:06:40.639 ==================================================================================== 00:06:40.639 Total 570496/s 2228 MiB/s 0 0' 00:06:40.639 14:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:40.639 14:53:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:40.639 14:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:40.639 14:53:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:40.639 14:53:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.639 14:53:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.639 14:53:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.639 14:53:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.639 14:53:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.639 14:53:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.639 14:53:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.639 14:53:03 -- accel/accel.sh@42 -- # jq -r . 00:06:40.639 [2024-11-18 14:53:03.995114] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.640 [2024-11-18 14:53:03.995209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70709 ] 00:06:40.640 [2024-11-18 14:53:04.131484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.640 [2024-11-18 14:53:04.185717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.898 14:53:04 -- accel/accel.sh@21 -- # val= 00:06:40.898 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.898 14:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:40.898 14:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:40.898 14:53:04 -- accel/accel.sh@21 -- # val= 00:06:40.899 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:40.899 14:53:04 -- accel/accel.sh@21 -- # val=0x1 00:06:40.899 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:40.899 14:53:04 -- accel/accel.sh@21 -- # val= 00:06:40.899 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:40.899 14:53:04 -- accel/accel.sh@21 -- # val= 00:06:40.899 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:40.899 14:53:04 -- accel/accel.sh@21 -- # val=fill 00:06:40.899 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.899 14:53:04 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:40.899 14:53:04 -- accel/accel.sh@21 -- # val=0x80 00:06:40.899 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # read -r var val 
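The fill results a few entries above can be sanity-checked with simple arithmetic: 570496 transfers per second of 4096-byte buffers works out to roughly 2228 MiB/s, matching the reported bandwidth. A purely illustrative check in the same shell:

  # 570496 transfers/s * 4096 bytes per transfer, expressed in MiB/s
  echo $((570496 * 4096 / 1048576))   # prints 2228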
00:06:40.899 14:53:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.899 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:40.899 14:53:04 -- accel/accel.sh@21 -- # val= 00:06:40.899 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:40.899 14:53:04 -- accel/accel.sh@21 -- # val=software 00:06:40.899 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.899 14:53:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:40.899 14:53:04 -- accel/accel.sh@21 -- # val=64 00:06:40.899 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:40.899 14:53:04 -- accel/accel.sh@21 -- # val=64 00:06:40.899 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:40.899 14:53:04 -- accel/accel.sh@21 -- # val=1 00:06:40.899 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:40.899 14:53:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.899 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:40.899 14:53:04 -- accel/accel.sh@21 -- # val=Yes 00:06:40.899 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:40.899 14:53:04 -- accel/accel.sh@21 -- # val= 00:06:40.899 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:40.899 14:53:04 -- accel/accel.sh@21 -- # val= 00:06:40.899 14:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:40.899 14:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:41.835 14:53:05 -- accel/accel.sh@21 -- # val= 00:06:41.836 14:53:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.836 14:53:05 -- accel/accel.sh@20 -- # IFS=: 00:06:41.836 14:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:41.836 14:53:05 -- accel/accel.sh@21 -- # val= 00:06:41.836 14:53:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.836 14:53:05 -- accel/accel.sh@20 -- # IFS=: 00:06:41.836 14:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:41.836 14:53:05 -- accel/accel.sh@21 -- # val= 00:06:41.836 14:53:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.836 14:53:05 -- accel/accel.sh@20 -- # IFS=: 00:06:41.836 14:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:41.836 14:53:05 -- accel/accel.sh@21 -- # val= 00:06:41.836 14:53:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.836 14:53:05 -- accel/accel.sh@20 -- # IFS=: 00:06:41.836 14:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:41.836 14:53:05 -- accel/accel.sh@21 -- # val= 00:06:41.836 14:53:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.836 14:53:05 -- accel/accel.sh@20 -- # IFS=: 
00:06:41.836 14:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:41.836 14:53:05 -- accel/accel.sh@21 -- # val= 00:06:41.836 14:53:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.836 14:53:05 -- accel/accel.sh@20 -- # IFS=: 00:06:41.836 14:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:41.836 14:53:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:41.836 14:53:05 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:41.836 14:53:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.836 00:06:41.836 real 0m2.799s 00:06:41.836 user 0m2.394s 00:06:41.836 sys 0m0.210s 00:06:41.836 14:53:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.836 ************************************ 00:06:41.836 END TEST accel_fill 00:06:41.836 ************************************ 00:06:41.836 14:53:05 -- common/autotest_common.sh@10 -- # set +x 00:06:42.094 14:53:05 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:42.094 14:53:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:42.094 14:53:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.094 14:53:05 -- common/autotest_common.sh@10 -- # set +x 00:06:42.094 ************************************ 00:06:42.094 START TEST accel_copy_crc32c 00:06:42.094 ************************************ 00:06:42.094 14:53:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:42.094 14:53:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.094 14:53:05 -- accel/accel.sh@17 -- # local accel_module 00:06:42.094 14:53:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:42.094 14:53:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:42.094 14:53:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.094 14:53:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.094 14:53:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.094 14:53:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.094 14:53:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.094 14:53:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.094 14:53:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.094 14:53:05 -- accel/accel.sh@42 -- # jq -r . 00:06:42.094 [2024-11-18 14:53:05.454355] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:42.094 [2024-11-18 14:53:05.454453] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70740 ] 00:06:42.094 [2024-11-18 14:53:05.584286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.094 [2024-11-18 14:53:05.638705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.471 14:53:06 -- accel/accel.sh@18 -- # out=' 00:06:43.471 SPDK Configuration: 00:06:43.471 Core mask: 0x1 00:06:43.471 00:06:43.471 Accel Perf Configuration: 00:06:43.471 Workload Type: copy_crc32c 00:06:43.471 CRC-32C seed: 0 00:06:43.471 Vector size: 4096 bytes 00:06:43.471 Transfer size: 4096 bytes 00:06:43.471 Vector count 1 00:06:43.471 Module: software 00:06:43.471 Queue depth: 32 00:06:43.471 Allocate depth: 32 00:06:43.471 # threads/core: 1 00:06:43.471 Run time: 1 seconds 00:06:43.471 Verify: Yes 00:06:43.471 00:06:43.471 Running for 1 seconds... 
00:06:43.471 00:06:43.471 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:43.471 ------------------------------------------------------------------------------------ 00:06:43.471 0,0 309440/s 1208 MiB/s 0 0 00:06:43.471 ==================================================================================== 00:06:43.471 Total 309440/s 1208 MiB/s 0 0' 00:06:43.471 14:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:43.471 14:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:43.471 14:53:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:43.471 14:53:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:43.471 14:53:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.471 14:53:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.471 14:53:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.471 14:53:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.471 14:53:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.471 14:53:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.471 14:53:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.471 14:53:06 -- accel/accel.sh@42 -- # jq -r . 00:06:43.471 [2024-11-18 14:53:06.841202] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:43.471 [2024-11-18 14:53:06.841307] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70760 ] 00:06:43.471 [2024-11-18 14:53:06.979404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.471 [2024-11-18 14:53:07.045653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.731 14:53:07 -- accel/accel.sh@21 -- # val= 00:06:43.731 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 14:53:07 -- accel/accel.sh@21 -- # val= 00:06:43.731 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 14:53:07 -- accel/accel.sh@21 -- # val=0x1 00:06:43.731 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 14:53:07 -- accel/accel.sh@21 -- # val= 00:06:43.731 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 14:53:07 -- accel/accel.sh@21 -- # val= 00:06:43.731 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 14:53:07 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:43.731 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 14:53:07 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 14:53:07 -- accel/accel.sh@21 -- # val=0 00:06:43.731 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 
14:53:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:43.731 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 14:53:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:43.731 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 14:53:07 -- accel/accel.sh@21 -- # val= 00:06:43.731 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 14:53:07 -- accel/accel.sh@21 -- # val=software 00:06:43.731 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 14:53:07 -- accel/accel.sh@23 -- # accel_module=software 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 14:53:07 -- accel/accel.sh@21 -- # val=32 00:06:43.731 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 14:53:07 -- accel/accel.sh@21 -- # val=32 00:06:43.731 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 14:53:07 -- accel/accel.sh@21 -- # val=1 00:06:43.731 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 14:53:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:43.731 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 14:53:07 -- accel/accel.sh@21 -- # val=Yes 00:06:43.731 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 14:53:07 -- accel/accel.sh@21 -- # val= 00:06:43.731 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:43.731 14:53:07 -- accel/accel.sh@21 -- # val= 00:06:43.731 14:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:43.731 14:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:44.666 14:53:08 -- accel/accel.sh@21 -- # val= 00:06:44.666 14:53:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.666 14:53:08 -- accel/accel.sh@20 -- # IFS=: 00:06:44.666 14:53:08 -- accel/accel.sh@20 -- # read -r var val 00:06:44.666 14:53:08 -- accel/accel.sh@21 -- # val= 00:06:44.666 14:53:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.666 14:53:08 -- accel/accel.sh@20 -- # IFS=: 00:06:44.666 14:53:08 -- accel/accel.sh@20 -- # read -r var val 00:06:44.667 14:53:08 -- accel/accel.sh@21 -- # val= 00:06:44.667 14:53:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.667 14:53:08 -- accel/accel.sh@20 -- # IFS=: 00:06:44.667 14:53:08 -- accel/accel.sh@20 -- # read -r var val 00:06:44.667 14:53:08 -- accel/accel.sh@21 -- # val= 00:06:44.667 14:53:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.667 14:53:08 -- accel/accel.sh@20 -- # IFS=: 
00:06:44.667 14:53:08 -- accel/accel.sh@20 -- # read -r var val 00:06:44.667 14:53:08 -- accel/accel.sh@21 -- # val= 00:06:44.667 14:53:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.667 14:53:08 -- accel/accel.sh@20 -- # IFS=: 00:06:44.667 14:53:08 -- accel/accel.sh@20 -- # read -r var val 00:06:44.667 14:53:08 -- accel/accel.sh@21 -- # val= 00:06:44.667 14:53:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.667 14:53:08 -- accel/accel.sh@20 -- # IFS=: 00:06:44.667 14:53:08 -- accel/accel.sh@20 -- # read -r var val 00:06:44.667 14:53:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:44.667 14:53:08 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:44.667 14:53:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.667 00:06:44.667 real 0m2.795s 00:06:44.667 user 0m2.385s 00:06:44.667 sys 0m0.213s 00:06:44.667 ************************************ 00:06:44.667 END TEST accel_copy_crc32c 00:06:44.667 ************************************ 00:06:44.667 14:53:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.667 14:53:08 -- common/autotest_common.sh@10 -- # set +x 00:06:44.926 14:53:08 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:44.926 14:53:08 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:44.926 14:53:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.926 14:53:08 -- common/autotest_common.sh@10 -- # set +x 00:06:44.926 ************************************ 00:06:44.926 START TEST accel_copy_crc32c_C2 00:06:44.926 ************************************ 00:06:44.926 14:53:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:44.926 14:53:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.926 14:53:08 -- accel/accel.sh@17 -- # local accel_module 00:06:44.926 14:53:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:44.926 14:53:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:44.926 14:53:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.926 14:53:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.926 14:53:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.926 14:53:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.926 14:53:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.926 14:53:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.926 14:53:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.926 14:53:08 -- accel/accel.sh@42 -- # jq -r . 00:06:44.926 [2024-11-18 14:53:08.304792] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:44.926 [2024-11-18 14:53:08.304888] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70794 ] 00:06:44.926 [2024-11-18 14:53:08.440374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.926 [2024-11-18 14:53:08.498396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.302 14:53:09 -- accel/accel.sh@18 -- # out=' 00:06:46.302 SPDK Configuration: 00:06:46.302 Core mask: 0x1 00:06:46.302 00:06:46.302 Accel Perf Configuration: 00:06:46.302 Workload Type: copy_crc32c 00:06:46.302 CRC-32C seed: 0 00:06:46.302 Vector size: 4096 bytes 00:06:46.302 Transfer size: 8192 bytes 00:06:46.302 Vector count 2 00:06:46.302 Module: software 00:06:46.302 Queue depth: 32 00:06:46.302 Allocate depth: 32 00:06:46.302 # threads/core: 1 00:06:46.302 Run time: 1 seconds 00:06:46.302 Verify: Yes 00:06:46.302 00:06:46.302 Running for 1 seconds... 00:06:46.302 00:06:46.302 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:46.302 ------------------------------------------------------------------------------------ 00:06:46.302 0,0 212096/s 1657 MiB/s 0 0 00:06:46.302 ==================================================================================== 00:06:46.302 Total 212096/s 828 MiB/s 0 0' 00:06:46.302 14:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:46.302 14:53:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:46.302 14:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:46.302 14:53:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:46.302 14:53:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.302 14:53:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.302 14:53:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.302 14:53:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.302 14:53:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.302 14:53:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.302 14:53:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.302 14:53:09 -- accel/accel.sh@42 -- # jq -r . 00:06:46.302 [2024-11-18 14:53:09.755938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:46.302 [2024-11-18 14:53:09.756044] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70814 ] 00:06:46.561 [2024-11-18 14:53:09.893418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.561 [2024-11-18 14:53:09.954762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.561 14:53:10 -- accel/accel.sh@21 -- # val= 00:06:46.561 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.561 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.561 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.561 14:53:10 -- accel/accel.sh@21 -- # val= 00:06:46.561 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.561 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.561 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.561 14:53:10 -- accel/accel.sh@21 -- # val=0x1 00:06:46.561 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.561 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.561 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.561 14:53:10 -- accel/accel.sh@21 -- # val= 00:06:46.561 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.562 14:53:10 -- accel/accel.sh@21 -- # val= 00:06:46.562 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.562 14:53:10 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:46.562 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.562 14:53:10 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.562 14:53:10 -- accel/accel.sh@21 -- # val=0 00:06:46.562 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.562 14:53:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.562 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.562 14:53:10 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:46.562 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.562 14:53:10 -- accel/accel.sh@21 -- # val= 00:06:46.562 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.562 14:53:10 -- accel/accel.sh@21 -- # val=software 00:06:46.562 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.562 14:53:10 -- accel/accel.sh@23 -- # accel_module=software 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.562 14:53:10 -- accel/accel.sh@21 -- # val=32 00:06:46.562 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.562 14:53:10 -- accel/accel.sh@21 -- # val=32 
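In the copy_crc32c run with -C 2, each operation spans two 4096-byte source vectors (8192 bytes total), matching the '4096 bytes' and '8192 bytes' values set in the trace and the 8192-byte transfer size in the configuration summary. Under that reading, the per-core row's 1657 MiB/s follows from 212096 operations per second at 8192 bytes each, while the Total row's 828 MiB/s corresponds to the same rate over a single 4096-byte buffer; an illustrative check:

  # 212096 ops/s at 8192 bytes and at 4096 bytes, in MiB/s
  echo $((212096 * 8192 / 1048576)) $((212096 * 4096 / 1048576))   # prints 1657 828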
00:06:46.562 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.562 14:53:10 -- accel/accel.sh@21 -- # val=1 00:06:46.562 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.562 14:53:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:46.562 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.562 14:53:10 -- accel/accel.sh@21 -- # val=Yes 00:06:46.562 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.562 14:53:10 -- accel/accel.sh@21 -- # val= 00:06:46.562 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:46.562 14:53:10 -- accel/accel.sh@21 -- # val= 00:06:46.562 14:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:46.562 14:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:47.939 14:53:11 -- accel/accel.sh@21 -- # val= 00:06:47.939 14:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.939 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:47.939 14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:47.939 14:53:11 -- accel/accel.sh@21 -- # val= 00:06:47.939 14:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.939 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:47.939 14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:47.939 14:53:11 -- accel/accel.sh@21 -- # val= 00:06:47.939 14:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.939 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:47.940 14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:47.940 14:53:11 -- accel/accel.sh@21 -- # val= 00:06:47.940 14:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.940 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:47.940 14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:47.940 14:53:11 -- accel/accel.sh@21 -- # val= 00:06:47.940 14:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.940 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:47.940 14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:47.940 14:53:11 -- accel/accel.sh@21 -- # val= 00:06:47.940 ************************************ 00:06:47.940 END TEST accel_copy_crc32c_C2 00:06:47.940 ************************************ 00:06:47.940 14:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.940 14:53:11 -- accel/accel.sh@20 -- # IFS=: 00:06:47.940 14:53:11 -- accel/accel.sh@20 -- # read -r var val 00:06:47.940 14:53:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.940 14:53:11 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:47.940 14:53:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.940 00:06:47.940 real 0m2.877s 00:06:47.940 user 0m2.440s 00:06:47.940 sys 0m0.237s 00:06:47.940 14:53:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.940 14:53:11 -- common/autotest_common.sh@10 -- # set +x 00:06:47.940 14:53:11 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:47.940 14:53:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:06:47.940 14:53:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.940 14:53:11 -- common/autotest_common.sh@10 -- # set +x 00:06:47.940 ************************************ 00:06:47.940 START TEST accel_dualcast 00:06:47.940 ************************************ 00:06:47.940 14:53:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:47.940 14:53:11 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.940 14:53:11 -- accel/accel.sh@17 -- # local accel_module 00:06:47.940 14:53:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:47.940 14:53:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:47.940 14:53:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.940 14:53:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.940 14:53:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.940 14:53:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.940 14:53:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.940 14:53:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.940 14:53:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.940 14:53:11 -- accel/accel.sh@42 -- # jq -r . 00:06:47.940 [2024-11-18 14:53:11.237211] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:47.940 [2024-11-18 14:53:11.237311] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70847 ] 00:06:47.940 [2024-11-18 14:53:11.375959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.940 [2024-11-18 14:53:11.434998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.315 14:53:12 -- accel/accel.sh@18 -- # out=' 00:06:49.315 SPDK Configuration: 00:06:49.315 Core mask: 0x1 00:06:49.315 00:06:49.315 Accel Perf Configuration: 00:06:49.315 Workload Type: dualcast 00:06:49.315 Transfer size: 4096 bytes 00:06:49.315 Vector count 1 00:06:49.315 Module: software 00:06:49.315 Queue depth: 32 00:06:49.315 Allocate depth: 32 00:06:49.315 # threads/core: 1 00:06:49.315 Run time: 1 seconds 00:06:49.315 Verify: Yes 00:06:49.315 00:06:49.315 Running for 1 seconds... 00:06:49.315 00:06:49.315 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:49.315 ------------------------------------------------------------------------------------ 00:06:49.315 0,0 423744/s 1655 MiB/s 0 0 00:06:49.315 ==================================================================================== 00:06:49.315 Total 423744/s 1655 MiB/s 0 0' 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # IFS=: 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # read -r var val 00:06:49.315 14:53:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:49.315 14:53:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:49.315 14:53:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.315 14:53:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.315 14:53:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.315 14:53:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.315 14:53:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.315 14:53:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.315 14:53:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.315 14:53:12 -- accel/accel.sh@42 -- # jq -r . 
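The dualcast test starting here copies a single 4096-byte source buffer into two destination buffers per transfer, and -y asks accel_perf to verify the result ("Verify: Yes" in the configuration dump). A minimal sketch of that behaviour, illustrative only and not the SPDK code path:

```python
def dualcast(src: bytes) -> tuple[bytes, bytes]:
    # One source, two destinations: both copies carry the same payload.
    return bytes(src), bytes(src)

src = bytes(range(256)) * 16            # 4096 bytes, the configured transfer size
dst1, dst2 = dualcast(src)
assert dst1 == src and dst2 == src      # the per-transfer check implied by "Verify: Yes"
```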
00:06:49.315 [2024-11-18 14:53:12.644164] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:49.315 [2024-11-18 14:53:12.644259] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70862 ] 00:06:49.315 [2024-11-18 14:53:12.780533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.315 [2024-11-18 14:53:12.834411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.315 14:53:12 -- accel/accel.sh@21 -- # val= 00:06:49.315 14:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # IFS=: 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # read -r var val 00:06:49.315 14:53:12 -- accel/accel.sh@21 -- # val= 00:06:49.315 14:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # IFS=: 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # read -r var val 00:06:49.315 14:53:12 -- accel/accel.sh@21 -- # val=0x1 00:06:49.315 14:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # IFS=: 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # read -r var val 00:06:49.315 14:53:12 -- accel/accel.sh@21 -- # val= 00:06:49.315 14:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # IFS=: 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # read -r var val 00:06:49.315 14:53:12 -- accel/accel.sh@21 -- # val= 00:06:49.315 14:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # IFS=: 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # read -r var val 00:06:49.315 14:53:12 -- accel/accel.sh@21 -- # val=dualcast 00:06:49.315 14:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.315 14:53:12 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # IFS=: 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # read -r var val 00:06:49.315 14:53:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:49.315 14:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # IFS=: 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # read -r var val 00:06:49.315 14:53:12 -- accel/accel.sh@21 -- # val= 00:06:49.315 14:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # IFS=: 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # read -r var val 00:06:49.315 14:53:12 -- accel/accel.sh@21 -- # val=software 00:06:49.315 14:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.315 14:53:12 -- accel/accel.sh@23 -- # accel_module=software 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # IFS=: 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # read -r var val 00:06:49.315 14:53:12 -- accel/accel.sh@21 -- # val=32 00:06:49.315 14:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # IFS=: 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # read -r var val 00:06:49.315 14:53:12 -- accel/accel.sh@21 -- # val=32 00:06:49.315 14:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # IFS=: 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # read -r var val 00:06:49.315 14:53:12 -- accel/accel.sh@21 -- # val=1 00:06:49.315 14:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # IFS=: 00:06:49.315 
14:53:12 -- accel/accel.sh@20 -- # read -r var val 00:06:49.315 14:53:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:49.315 14:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # IFS=: 00:06:49.315 14:53:12 -- accel/accel.sh@20 -- # read -r var val 00:06:49.315 14:53:12 -- accel/accel.sh@21 -- # val=Yes 00:06:49.574 14:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.574 14:53:12 -- accel/accel.sh@20 -- # IFS=: 00:06:49.574 14:53:12 -- accel/accel.sh@20 -- # read -r var val 00:06:49.574 14:53:12 -- accel/accel.sh@21 -- # val= 00:06:49.574 14:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.574 14:53:12 -- accel/accel.sh@20 -- # IFS=: 00:06:49.574 14:53:12 -- accel/accel.sh@20 -- # read -r var val 00:06:49.574 14:53:12 -- accel/accel.sh@21 -- # val= 00:06:49.574 14:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.574 14:53:12 -- accel/accel.sh@20 -- # IFS=: 00:06:49.574 14:53:12 -- accel/accel.sh@20 -- # read -r var val 00:06:50.512 14:53:14 -- accel/accel.sh@21 -- # val= 00:06:50.512 14:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.512 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.512 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.512 14:53:14 -- accel/accel.sh@21 -- # val= 00:06:50.512 14:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.512 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.512 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.512 14:53:14 -- accel/accel.sh@21 -- # val= 00:06:50.512 14:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.512 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.512 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.512 14:53:14 -- accel/accel.sh@21 -- # val= 00:06:50.512 14:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.512 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.512 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.512 14:53:14 -- accel/accel.sh@21 -- # val= 00:06:50.512 14:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.512 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.512 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.512 14:53:14 -- accel/accel.sh@21 -- # val= 00:06:50.512 14:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.512 14:53:14 -- accel/accel.sh@20 -- # IFS=: 00:06:50.512 14:53:14 -- accel/accel.sh@20 -- # read -r var val 00:06:50.512 14:53:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:50.512 14:53:14 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:50.512 14:53:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.512 00:06:50.512 real 0m2.811s 00:06:50.512 user 0m2.383s 00:06:50.512 sys 0m0.226s 00:06:50.512 14:53:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.512 14:53:14 -- common/autotest_common.sh@10 -- # set +x 00:06:50.512 ************************************ 00:06:50.512 END TEST accel_dualcast 00:06:50.512 ************************************ 00:06:50.512 14:53:14 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:50.512 14:53:14 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:50.512 14:53:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.512 14:53:14 -- common/autotest_common.sh@10 -- # set +x 00:06:50.512 ************************************ 00:06:50.512 START TEST accel_compare 00:06:50.512 ************************************ 00:06:50.512 14:53:14 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:50.512 
14:53:14 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.512 14:53:14 -- accel/accel.sh@17 -- # local accel_module 00:06:50.512 14:53:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:50.512 14:53:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.512 14:53:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:50.512 14:53:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.512 14:53:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.512 14:53:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.512 14:53:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.512 14:53:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.512 14:53:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.512 14:53:14 -- accel/accel.sh@42 -- # jq -r . 00:06:50.771 [2024-11-18 14:53:14.101201] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:50.771 [2024-11-18 14:53:14.101307] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70898 ] 00:06:50.771 [2024-11-18 14:53:14.240008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.771 [2024-11-18 14:53:14.301095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.147 14:53:15 -- accel/accel.sh@18 -- # out=' 00:06:52.147 SPDK Configuration: 00:06:52.147 Core mask: 0x1 00:06:52.147 00:06:52.147 Accel Perf Configuration: 00:06:52.147 Workload Type: compare 00:06:52.147 Transfer size: 4096 bytes 00:06:52.147 Vector count 1 00:06:52.147 Module: software 00:06:52.147 Queue depth: 32 00:06:52.147 Allocate depth: 32 00:06:52.147 # threads/core: 1 00:06:52.147 Run time: 1 seconds 00:06:52.147 Verify: Yes 00:06:52.147 00:06:52.147 Running for 1 seconds... 00:06:52.147 00:06:52.147 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:52.147 ------------------------------------------------------------------------------------ 00:06:52.147 0,0 564768/s 2206 MiB/s 0 0 00:06:52.148 ==================================================================================== 00:06:52.148 Total 564768/s 2206 MiB/s 0 0' 00:06:52.148 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.148 14:53:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:52.148 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.148 14:53:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.148 14:53:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:52.148 14:53:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.148 14:53:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.148 14:53:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.148 14:53:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.148 14:53:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.148 14:53:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.148 14:53:15 -- accel/accel.sh@42 -- # jq -r . 00:06:52.148 [2024-11-18 14:53:15.522168] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:52.148 [2024-11-18 14:53:15.522271] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70917 ] 00:06:52.148 [2024-11-18 14:53:15.657936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.148 [2024-11-18 14:53:15.714374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.407 14:53:15 -- accel/accel.sh@21 -- # val= 00:06:52.407 14:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.407 14:53:15 -- accel/accel.sh@21 -- # val= 00:06:52.407 14:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.407 14:53:15 -- accel/accel.sh@21 -- # val=0x1 00:06:52.407 14:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.407 14:53:15 -- accel/accel.sh@21 -- # val= 00:06:52.407 14:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.407 14:53:15 -- accel/accel.sh@21 -- # val= 00:06:52.407 14:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.407 14:53:15 -- accel/accel.sh@21 -- # val=compare 00:06:52.407 14:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.407 14:53:15 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.407 14:53:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:52.407 14:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.407 14:53:15 -- accel/accel.sh@21 -- # val= 00:06:52.407 14:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.407 14:53:15 -- accel/accel.sh@21 -- # val=software 00:06:52.407 14:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.407 14:53:15 -- accel/accel.sh@23 -- # accel_module=software 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.407 14:53:15 -- accel/accel.sh@21 -- # val=32 00:06:52.407 14:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.407 14:53:15 -- accel/accel.sh@21 -- # val=32 00:06:52.407 14:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.407 14:53:15 -- accel/accel.sh@21 -- # val=1 00:06:52.407 14:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.407 14:53:15 -- accel/accel.sh@21 -- # val='1 seconds' 
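The compare workload benchmarked above moves no data: each transfer checks two 4096-byte buffers for equality, which is why it posts the highest rate in this series (564768 transfers/s, about 2206 MiB/s at 4096 bytes per transfer), and the Failed/Miscompares columns stay at 0 as long as every pair matches. A minimal sketch of that check, not SPDK's implementation:

```python
def compare(buf_a: bytes, buf_b: bytes) -> int:
    # 0 on match, non-zero on miscompare; no destination buffer is written.
    return 0 if buf_a == buf_b else 1

a = b = bytes(4096)                     # 4096 bytes, the configured transfer size
assert compare(a, b) == 0               # keeps the Failed/Miscompares columns at 0
```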
00:06:52.407 14:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.407 14:53:15 -- accel/accel.sh@21 -- # val=Yes 00:06:52.407 14:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.407 14:53:15 -- accel/accel.sh@21 -- # val= 00:06:52.407 14:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:52.407 14:53:15 -- accel/accel.sh@21 -- # val= 00:06:52.407 14:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # IFS=: 00:06:52.407 14:53:15 -- accel/accel.sh@20 -- # read -r var val 00:06:53.346 14:53:16 -- accel/accel.sh@21 -- # val= 00:06:53.346 14:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.346 14:53:16 -- accel/accel.sh@20 -- # IFS=: 00:06:53.346 14:53:16 -- accel/accel.sh@20 -- # read -r var val 00:06:53.346 14:53:16 -- accel/accel.sh@21 -- # val= 00:06:53.346 14:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.346 14:53:16 -- accel/accel.sh@20 -- # IFS=: 00:06:53.346 14:53:16 -- accel/accel.sh@20 -- # read -r var val 00:06:53.346 14:53:16 -- accel/accel.sh@21 -- # val= 00:06:53.346 14:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.346 14:53:16 -- accel/accel.sh@20 -- # IFS=: 00:06:53.346 14:53:16 -- accel/accel.sh@20 -- # read -r var val 00:06:53.346 14:53:16 -- accel/accel.sh@21 -- # val= 00:06:53.346 14:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.346 14:53:16 -- accel/accel.sh@20 -- # IFS=: 00:06:53.346 14:53:16 -- accel/accel.sh@20 -- # read -r var val 00:06:53.346 14:53:16 -- accel/accel.sh@21 -- # val= 00:06:53.346 14:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.346 14:53:16 -- accel/accel.sh@20 -- # IFS=: 00:06:53.346 14:53:16 -- accel/accel.sh@20 -- # read -r var val 00:06:53.346 14:53:16 -- accel/accel.sh@21 -- # val= 00:06:53.346 14:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.346 14:53:16 -- accel/accel.sh@20 -- # IFS=: 00:06:53.346 14:53:16 -- accel/accel.sh@20 -- # read -r var val 00:06:53.346 14:53:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:53.346 14:53:16 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:53.346 14:53:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.346 00:06:53.346 real 0m2.830s 00:06:53.346 user 0m2.394s 00:06:53.346 sys 0m0.236s 00:06:53.346 14:53:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.346 ************************************ 00:06:53.346 END TEST accel_compare 00:06:53.346 ************************************ 00:06:53.346 14:53:16 -- common/autotest_common.sh@10 -- # set +x 00:06:53.606 14:53:16 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:53.606 14:53:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:53.606 14:53:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.606 14:53:16 -- common/autotest_common.sh@10 -- # set +x 00:06:53.606 ************************************ 00:06:53.606 START TEST accel_xor 00:06:53.606 ************************************ 00:06:53.606 14:53:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:53.606 14:53:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.606 14:53:16 -- accel/accel.sh@17 -- # local accel_module 00:06:53.606 
14:53:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:53.606 14:53:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:53.606 14:53:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.606 14:53:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.606 14:53:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.606 14:53:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.606 14:53:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.606 14:53:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.606 14:53:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.606 14:53:16 -- accel/accel.sh@42 -- # jq -r . 00:06:53.606 [2024-11-18 14:53:16.978291] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:53.606 [2024-11-18 14:53:16.978430] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70953 ] 00:06:53.606 [2024-11-18 14:53:17.101097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.606 [2024-11-18 14:53:17.153048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.983 14:53:18 -- accel/accel.sh@18 -- # out=' 00:06:54.983 SPDK Configuration: 00:06:54.983 Core mask: 0x1 00:06:54.983 00:06:54.983 Accel Perf Configuration: 00:06:54.983 Workload Type: xor 00:06:54.983 Source buffers: 2 00:06:54.983 Transfer size: 4096 bytes 00:06:54.983 Vector count 1 00:06:54.983 Module: software 00:06:54.983 Queue depth: 32 00:06:54.983 Allocate depth: 32 00:06:54.983 # threads/core: 1 00:06:54.983 Run time: 1 seconds 00:06:54.983 Verify: Yes 00:06:54.983 00:06:54.983 Running for 1 seconds... 00:06:54.983 00:06:54.983 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.983 ------------------------------------------------------------------------------------ 00:06:54.983 0,0 291424/s 1138 MiB/s 0 0 00:06:54.983 ==================================================================================== 00:06:54.983 Total 291424/s 1138 MiB/s 0 0' 00:06:54.983 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:54.983 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:54.983 14:53:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:54.983 14:53:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:54.983 14:53:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.983 14:53:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.983 14:53:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.983 14:53:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.983 14:53:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.983 14:53:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.983 14:53:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.983 14:53:18 -- accel/accel.sh@42 -- # jq -r . 00:06:54.983 [2024-11-18 14:53:18.364578] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
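This xor run uses two 4096-byte source buffers ("Source buffers: 2") and writes their byte-wise XOR into a destination buffer, as in RAID-style parity generation. A small sketch, assuming plain byte-wise XOR across the sources:

```python
def xor_buffers(sources: list[bytes]) -> bytes:
    # Byte-wise XOR of all source buffers into one destination buffer.
    dst = bytearray(len(sources[0]))
    for src in sources:
        for i, byte in enumerate(src):
            dst[i] ^= byte
    return bytes(dst)

srcs = [bytes([0xAA] * 4096), bytes([0x55] * 4096)]    # "Source buffers: 2"
assert xor_buffers(srcs) == bytes([0xFF] * 4096)       # 0xAA ^ 0x55 == 0xFF
```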
00:06:54.983 [2024-11-18 14:53:18.364685] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70967 ] 00:06:54.983 [2024-11-18 14:53:18.501574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.983 [2024-11-18 14:53:18.566856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.242 14:53:18 -- accel/accel.sh@21 -- # val= 00:06:55.242 14:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.242 14:53:18 -- accel/accel.sh@21 -- # val= 00:06:55.242 14:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.242 14:53:18 -- accel/accel.sh@21 -- # val=0x1 00:06:55.242 14:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.242 14:53:18 -- accel/accel.sh@21 -- # val= 00:06:55.242 14:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.242 14:53:18 -- accel/accel.sh@21 -- # val= 00:06:55.242 14:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.242 14:53:18 -- accel/accel.sh@21 -- # val=xor 00:06:55.242 14:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.242 14:53:18 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.242 14:53:18 -- accel/accel.sh@21 -- # val=2 00:06:55.242 14:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.242 14:53:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.242 14:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.242 14:53:18 -- accel/accel.sh@21 -- # val= 00:06:55.242 14:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.242 14:53:18 -- accel/accel.sh@21 -- # val=software 00:06:55.242 14:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.242 14:53:18 -- accel/accel.sh@23 -- # accel_module=software 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.242 14:53:18 -- accel/accel.sh@21 -- # val=32 00:06:55.242 14:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.242 14:53:18 -- accel/accel.sh@21 -- # val=32 00:06:55.242 14:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.242 14:53:18 -- accel/accel.sh@21 -- # val=1 00:06:55.242 14:53:18 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.242 14:53:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:55.242 14:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.242 14:53:18 -- accel/accel.sh@21 -- # val=Yes 00:06:55.242 14:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.242 14:53:18 -- accel/accel.sh@21 -- # val= 00:06:55.242 14:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:55.242 14:53:18 -- accel/accel.sh@21 -- # val= 00:06:55.242 14:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # IFS=: 00:06:55.242 14:53:18 -- accel/accel.sh@20 -- # read -r var val 00:06:56.180 14:53:19 -- accel/accel.sh@21 -- # val= 00:06:56.180 14:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.180 14:53:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.180 14:53:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.180 14:53:19 -- accel/accel.sh@21 -- # val= 00:06:56.180 14:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.180 14:53:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.180 14:53:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.180 14:53:19 -- accel/accel.sh@21 -- # val= 00:06:56.180 14:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.180 14:53:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.180 14:53:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.180 14:53:19 -- accel/accel.sh@21 -- # val= 00:06:56.180 ************************************ 00:06:56.180 END TEST accel_xor 00:06:56.180 ************************************ 00:06:56.180 14:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.180 14:53:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.180 14:53:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.180 14:53:19 -- accel/accel.sh@21 -- # val= 00:06:56.180 14:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.180 14:53:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.180 14:53:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.180 14:53:19 -- accel/accel.sh@21 -- # val= 00:06:56.180 14:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.180 14:53:19 -- accel/accel.sh@20 -- # IFS=: 00:06:56.180 14:53:19 -- accel/accel.sh@20 -- # read -r var val 00:06:56.180 14:53:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.180 14:53:19 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:56.180 14:53:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.180 00:06:56.180 real 0m2.803s 00:06:56.180 user 0m2.373s 00:06:56.180 sys 0m0.232s 00:06:56.180 14:53:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.180 14:53:19 -- common/autotest_common.sh@10 -- # set +x 00:06:56.440 14:53:19 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:56.440 14:53:19 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:56.440 14:53:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.440 14:53:19 -- common/autotest_common.sh@10 -- # set +x 00:06:56.440 ************************************ 00:06:56.440 START TEST accel_xor 00:06:56.440 ************************************ 00:06:56.440 
14:53:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:56.440 14:53:19 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.440 14:53:19 -- accel/accel.sh@17 -- # local accel_module 00:06:56.440 14:53:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:56.440 14:53:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.440 14:53:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:56.440 14:53:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.440 14:53:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.440 14:53:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.440 14:53:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.440 14:53:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.440 14:53:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.440 14:53:19 -- accel/accel.sh@42 -- # jq -r . 00:06:56.440 [2024-11-18 14:53:19.829152] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.440 [2024-11-18 14:53:19.829248] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71007 ] 00:06:56.440 [2024-11-18 14:53:19.967513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.698 [2024-11-18 14:53:20.046040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.077 14:53:21 -- accel/accel.sh@18 -- # out=' 00:06:58.077 SPDK Configuration: 00:06:58.077 Core mask: 0x1 00:06:58.077 00:06:58.077 Accel Perf Configuration: 00:06:58.077 Workload Type: xor 00:06:58.077 Source buffers: 3 00:06:58.077 Transfer size: 4096 bytes 00:06:58.077 Vector count 1 00:06:58.077 Module: software 00:06:58.077 Queue depth: 32 00:06:58.077 Allocate depth: 32 00:06:58.077 # threads/core: 1 00:06:58.077 Run time: 1 seconds 00:06:58.077 Verify: Yes 00:06:58.077 00:06:58.077 Running for 1 seconds... 00:06:58.077 00:06:58.077 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.077 ------------------------------------------------------------------------------------ 00:06:58.077 0,0 274944/s 1074 MiB/s 0 0 00:06:58.077 ==================================================================================== 00:06:58.077 Total 274944/s 1074 MiB/s 0 0' 00:06:58.077 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.077 14:53:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:58.077 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.077 14:53:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:58.077 14:53:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.077 14:53:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.077 14:53:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.077 14:53:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.077 14:53:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.077 14:53:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.077 14:53:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.077 14:53:21 -- accel/accel.sh@42 -- # jq -r . 00:06:58.077 [2024-11-18 14:53:21.252488] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
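The -x 3 variant measured above is the same operation over three source buffers; throughput drops only modestly (274944 vs 291424 transfers/s) since the extra source adds one more 4096-byte pass. A self-contained equivalent of the three-source case, again only a sketch of the software behaviour:

```python
from functools import reduce

# Same xor workload with three source buffers (-x 3), written as a reduce.
srcs = [bytes([0xAA] * 4096), bytes([0x55] * 4096), bytes([0x0F] * 4096)]
dst = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), srcs)
assert dst == bytes([0xF0] * 4096)      # 0xAA ^ 0x55 ^ 0x0F == 0xF0
```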
00:06:58.078 [2024-11-18 14:53:21.252740] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71021 ] 00:06:58.078 [2024-11-18 14:53:21.388782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.078 [2024-11-18 14:53:21.444911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.078 14:53:21 -- accel/accel.sh@21 -- # val= 00:06:58.078 14:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.078 14:53:21 -- accel/accel.sh@21 -- # val= 00:06:58.078 14:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.078 14:53:21 -- accel/accel.sh@21 -- # val=0x1 00:06:58.078 14:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.078 14:53:21 -- accel/accel.sh@21 -- # val= 00:06:58.078 14:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.078 14:53:21 -- accel/accel.sh@21 -- # val= 00:06:58.078 14:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.078 14:53:21 -- accel/accel.sh@21 -- # val=xor 00:06:58.078 14:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.078 14:53:21 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.078 14:53:21 -- accel/accel.sh@21 -- # val=3 00:06:58.078 14:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.078 14:53:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.078 14:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.078 14:53:21 -- accel/accel.sh@21 -- # val= 00:06:58.078 14:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.078 14:53:21 -- accel/accel.sh@21 -- # val=software 00:06:58.078 14:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.078 14:53:21 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.078 14:53:21 -- accel/accel.sh@21 -- # val=32 00:06:58.078 14:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.078 14:53:21 -- accel/accel.sh@21 -- # val=32 00:06:58.078 14:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.078 14:53:21 -- accel/accel.sh@21 -- # val=1 00:06:58.078 14:53:21 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.078 14:53:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.078 14:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.078 14:53:21 -- accel/accel.sh@21 -- # val=Yes 00:06:58.078 14:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.078 14:53:21 -- accel/accel.sh@21 -- # val= 00:06:58.078 14:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:58.078 14:53:21 -- accel/accel.sh@21 -- # val= 00:06:58.078 14:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # IFS=: 00:06:58.078 14:53:21 -- accel/accel.sh@20 -- # read -r var val 00:06:59.454 14:53:22 -- accel/accel.sh@21 -- # val= 00:06:59.454 14:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.454 14:53:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.454 14:53:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.454 14:53:22 -- accel/accel.sh@21 -- # val= 00:06:59.454 14:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.454 14:53:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.454 14:53:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.454 14:53:22 -- accel/accel.sh@21 -- # val= 00:06:59.454 14:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.454 14:53:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.454 14:53:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.454 14:53:22 -- accel/accel.sh@21 -- # val= 00:06:59.454 14:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.454 14:53:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.454 14:53:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.454 14:53:22 -- accel/accel.sh@21 -- # val= 00:06:59.454 14:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.454 14:53:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.454 14:53:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.454 14:53:22 -- accel/accel.sh@21 -- # val= 00:06:59.454 14:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.454 14:53:22 -- accel/accel.sh@20 -- # IFS=: 00:06:59.454 14:53:22 -- accel/accel.sh@20 -- # read -r var val 00:06:59.454 14:53:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.454 14:53:22 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:59.454 14:53:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.454 00:06:59.454 real 0m2.830s 00:06:59.454 user 0m2.399s 00:06:59.454 sys 0m0.230s 00:06:59.454 14:53:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.454 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:06:59.454 ************************************ 00:06:59.454 END TEST accel_xor 00:06:59.454 ************************************ 00:06:59.454 14:53:22 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:59.454 14:53:22 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:59.454 14:53:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.454 14:53:22 -- common/autotest_common.sh@10 -- # set +x 00:06:59.454 ************************************ 00:06:59.454 START TEST accel_dif_verify 00:06:59.454 ************************************ 
00:06:59.454 14:53:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:59.454 14:53:22 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.454 14:53:22 -- accel/accel.sh@17 -- # local accel_module 00:06:59.454 14:53:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:59.454 14:53:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:59.454 14:53:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.454 14:53:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.454 14:53:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.454 14:53:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.454 14:53:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.454 14:53:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.454 14:53:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.454 14:53:22 -- accel/accel.sh@42 -- # jq -r . 00:06:59.454 [2024-11-18 14:53:22.708741] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.454 [2024-11-18 14:53:22.708815] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71050 ] 00:06:59.454 [2024-11-18 14:53:22.838541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.454 [2024-11-18 14:53:22.904659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.925 14:53:24 -- accel/accel.sh@18 -- # out=' 00:07:00.925 SPDK Configuration: 00:07:00.925 Core mask: 0x1 00:07:00.925 00:07:00.925 Accel Perf Configuration: 00:07:00.925 Workload Type: dif_verify 00:07:00.925 Vector size: 4096 bytes 00:07:00.925 Transfer size: 4096 bytes 00:07:00.925 Block size: 512 bytes 00:07:00.925 Metadata size: 8 bytes 00:07:00.925 Vector count 1 00:07:00.925 Module: software 00:07:00.925 Queue depth: 32 00:07:00.925 Allocate depth: 32 00:07:00.925 # threads/core: 1 00:07:00.925 Run time: 1 seconds 00:07:00.925 Verify: No 00:07:00.925 00:07:00.925 Running for 1 seconds... 00:07:00.925 00:07:00.925 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.925 ------------------------------------------------------------------------------------ 00:07:00.925 0,0 125696/s 498 MiB/s 0 0 00:07:00.925 ==================================================================================== 00:07:00.925 Total 125696/s 491 MiB/s 0 0' 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:00.925 14:53:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:00.925 14:53:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:00.925 14:53:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.925 14:53:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.925 14:53:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.925 14:53:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.925 14:53:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.925 14:53:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.925 14:53:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.925 14:53:24 -- accel/accel.sh@42 -- # jq -r . 00:07:00.925 [2024-11-18 14:53:24.113574] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
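dif_verify walks each 4096-byte buffer as eight 512-byte blocks, each followed by 8 bytes of protection metadata ("Block size: 512 bytes", "Metadata size: 8 bytes" above), recomputes the protection fields, and compares them with the stored ones. The sketch below assumes the conventional T10 DIF layout (2-byte guard checksum, 2-byte application tag, 4-byte reference tag, big-endian) and a bit-by-bit CRC-16/T10-DIF guard; none of those field-level details appear in this log, so treat them as assumptions rather than a description of accel_perf's internals.

```python
import struct

BLOCK, META = 512, 8       # "Block size: 512 bytes", "Metadata size: 8 bytes"

def crc16_t10dif(data: bytes, crc: int = 0) -> int:
    # Bit-by-bit CRC-16 with the T10 DIF polynomial 0x8BB7 (assumed guard algorithm).
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = (crc << 1) ^ 0x8BB7 if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def dif_generate(data: bytes, ref_tag: int = 0, app_tag: int = 0) -> bytes:
    # Append an assumed 8-byte DIF (guard, app tag, ref tag) after every 512-byte block.
    out = bytearray()
    for n, i in enumerate(range(0, len(data), BLOCK)):
        block = data[i:i + BLOCK]
        out += block + struct.pack(">HHI", crc16_t10dif(block), app_tag, ref_tag + n)
    return bytes(out)

def dif_verify(protected: bytes, ref_tag: int = 0, app_tag: int = 0) -> bool:
    # Recompute each block's protection fields and compare them with the stored ones.
    for n, i in enumerate(range(0, len(protected), BLOCK + META)):
        block = protected[i:i + BLOCK]
        stored = struct.unpack(">HHI", protected[i + BLOCK:i + BLOCK + META])
        if stored != (crc16_t10dif(block), app_tag, ref_tag + n):
            return False
    return True

# One 4096-byte transfer = eight protected blocks, as configured for this run.
buf = dif_generate(bytes(4096))
assert dif_verify(buf)
```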
00:07:00.925 [2024-11-18 14:53:24.113676] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71075 ] 00:07:00.925 [2024-11-18 14:53:24.250543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.925 [2024-11-18 14:53:24.308050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.925 14:53:24 -- accel/accel.sh@21 -- # val= 00:07:00.925 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:00.925 14:53:24 -- accel/accel.sh@21 -- # val= 00:07:00.925 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:00.925 14:53:24 -- accel/accel.sh@21 -- # val=0x1 00:07:00.925 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:00.925 14:53:24 -- accel/accel.sh@21 -- # val= 00:07:00.925 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:00.925 14:53:24 -- accel/accel.sh@21 -- # val= 00:07:00.925 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:00.925 14:53:24 -- accel/accel.sh@21 -- # val=dif_verify 00:07:00.925 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.925 14:53:24 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:00.925 14:53:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.925 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:00.925 14:53:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.925 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:00.925 14:53:24 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:00.925 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:00.925 14:53:24 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:00.925 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:00.925 14:53:24 -- accel/accel.sh@21 -- # val= 00:07:00.925 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:00.925 14:53:24 -- accel/accel.sh@21 -- # val=software 00:07:00.925 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.925 14:53:24 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:00.925 14:53:24 -- accel/accel.sh@21 
-- # val=32 00:07:00.925 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:00.925 14:53:24 -- accel/accel.sh@21 -- # val=32 00:07:00.925 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:00.925 14:53:24 -- accel/accel.sh@21 -- # val=1 00:07:00.925 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:00.925 14:53:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.925 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.925 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:00.926 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:00.926 14:53:24 -- accel/accel.sh@21 -- # val=No 00:07:00.926 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.926 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:00.926 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:00.926 14:53:24 -- accel/accel.sh@21 -- # val= 00:07:00.926 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.926 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:00.926 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:00.926 14:53:24 -- accel/accel.sh@21 -- # val= 00:07:00.926 14:53:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.926 14:53:24 -- accel/accel.sh@20 -- # IFS=: 00:07:00.926 14:53:24 -- accel/accel.sh@20 -- # read -r var val 00:07:02.300 14:53:25 -- accel/accel.sh@21 -- # val= 00:07:02.300 14:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.300 14:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:02.300 14:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:02.300 14:53:25 -- accel/accel.sh@21 -- # val= 00:07:02.300 14:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.300 14:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:02.300 14:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:02.300 14:53:25 -- accel/accel.sh@21 -- # val= 00:07:02.300 14:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.300 14:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:02.300 14:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:02.300 14:53:25 -- accel/accel.sh@21 -- # val= 00:07:02.300 14:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.300 14:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:02.300 14:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:02.300 14:53:25 -- accel/accel.sh@21 -- # val= 00:07:02.300 14:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.300 14:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:02.300 14:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:02.300 14:53:25 -- accel/accel.sh@21 -- # val= 00:07:02.300 14:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.300 14:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:02.300 14:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:02.300 14:53:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:02.300 14:53:25 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:02.300 14:53:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.300 00:07:02.300 real 0m2.807s 00:07:02.300 user 0m2.398s 00:07:02.300 sys 0m0.213s 00:07:02.300 14:53:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.300 14:53:25 -- common/autotest_common.sh@10 -- # set +x 00:07:02.300 ************************************ 00:07:02.300 END TEST 
accel_dif_verify 00:07:02.300 ************************************ 00:07:02.300 14:53:25 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:02.300 14:53:25 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:02.300 14:53:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.300 14:53:25 -- common/autotest_common.sh@10 -- # set +x 00:07:02.300 ************************************ 00:07:02.300 START TEST accel_dif_generate 00:07:02.300 ************************************ 00:07:02.300 14:53:25 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:02.300 14:53:25 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.300 14:53:25 -- accel/accel.sh@17 -- # local accel_module 00:07:02.300 14:53:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:02.300 14:53:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:02.300 14:53:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.300 14:53:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.300 14:53:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.300 14:53:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.300 14:53:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.300 14:53:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.300 14:53:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.300 14:53:25 -- accel/accel.sh@42 -- # jq -r . 00:07:02.300 [2024-11-18 14:53:25.568415] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:02.300 [2024-11-18 14:53:25.568512] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71104 ] 00:07:02.300 [2024-11-18 14:53:25.704949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.300 [2024-11-18 14:53:25.763945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.684 14:53:26 -- accel/accel.sh@18 -- # out=' 00:07:03.684 SPDK Configuration: 00:07:03.684 Core mask: 0x1 00:07:03.684 00:07:03.684 Accel Perf Configuration: 00:07:03.684 Workload Type: dif_generate 00:07:03.684 Vector size: 4096 bytes 00:07:03.684 Transfer size: 4096 bytes 00:07:03.684 Block size: 512 bytes 00:07:03.684 Metadata size: 8 bytes 00:07:03.684 Vector count 1 00:07:03.684 Module: software 00:07:03.684 Queue depth: 32 00:07:03.684 Allocate depth: 32 00:07:03.684 # threads/core: 1 00:07:03.684 Run time: 1 seconds 00:07:03.684 Verify: No 00:07:03.684 00:07:03.684 Running for 1 seconds... 
00:07:03.684 00:07:03.684 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.684 ------------------------------------------------------------------------------------ 00:07:03.684 0,0 151840/s 602 MiB/s 0 0 00:07:03.684 ==================================================================================== 00:07:03.684 Total 151840/s 593 MiB/s 0 0' 00:07:03.684 14:53:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:03.684 14:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:03.684 14:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:03.684 14:53:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:03.684 14:53:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.684 14:53:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.684 14:53:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.684 14:53:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.684 14:53:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.684 14:53:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.684 14:53:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.684 14:53:26 -- accel/accel.sh@42 -- # jq -r . 00:07:03.684 [2024-11-18 14:53:26.977107] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:03.684 [2024-11-18 14:53:26.977200] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71124 ] 00:07:03.684 [2024-11-18 14:53:27.114884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.684 [2024-11-18 14:53:27.168346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.684 14:53:27 -- accel/accel.sh@21 -- # val= 00:07:03.684 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.684 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.684 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.684 14:53:27 -- accel/accel.sh@21 -- # val= 00:07:03.684 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.684 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.684 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.684 14:53:27 -- accel/accel.sh@21 -- # val=0x1 00:07:03.684 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.684 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.685 14:53:27 -- accel/accel.sh@21 -- # val= 00:07:03.685 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.685 14:53:27 -- accel/accel.sh@21 -- # val= 00:07:03.685 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.685 14:53:27 -- accel/accel.sh@21 -- # val=dif_generate 00:07:03.685 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.685 14:53:27 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.685 14:53:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.685 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # read -r var val 
00:07:03.685 14:53:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.685 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.685 14:53:27 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:03.685 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.685 14:53:27 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:03.685 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.685 14:53:27 -- accel/accel.sh@21 -- # val= 00:07:03.685 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.685 14:53:27 -- accel/accel.sh@21 -- # val=software 00:07:03.685 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.685 14:53:27 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.685 14:53:27 -- accel/accel.sh@21 -- # val=32 00:07:03.685 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.685 14:53:27 -- accel/accel.sh@21 -- # val=32 00:07:03.685 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.685 14:53:27 -- accel/accel.sh@21 -- # val=1 00:07:03.685 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.685 14:53:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.685 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.685 14:53:27 -- accel/accel.sh@21 -- # val=No 00:07:03.685 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.685 14:53:27 -- accel/accel.sh@21 -- # val= 00:07:03.685 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:03.685 14:53:27 -- accel/accel.sh@21 -- # val= 00:07:03.685 14:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:03.685 14:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:05.062 14:53:28 -- accel/accel.sh@21 -- # val= 00:07:05.062 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.062 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.062 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.062 14:53:28 -- accel/accel.sh@21 -- # val= 00:07:05.062 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.062 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.062 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.062 14:53:28 -- accel/accel.sh@21 -- # val= 00:07:05.062 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.062 14:53:28 -- 
accel/accel.sh@20 -- # IFS=: 00:07:05.062 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.062 14:53:28 -- accel/accel.sh@21 -- # val= 00:07:05.062 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.062 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.063 ************************************ 00:07:05.063 END TEST accel_dif_generate 00:07:05.063 ************************************ 00:07:05.063 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.063 14:53:28 -- accel/accel.sh@21 -- # val= 00:07:05.063 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.063 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.063 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.063 14:53:28 -- accel/accel.sh@21 -- # val= 00:07:05.063 14:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.063 14:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:05.063 14:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:05.063 14:53:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:05.063 14:53:28 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:05.063 14:53:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.063 00:07:05.063 real 0m2.810s 00:07:05.063 user 0m2.386s 00:07:05.063 sys 0m0.220s 00:07:05.063 14:53:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.063 14:53:28 -- common/autotest_common.sh@10 -- # set +x 00:07:05.063 14:53:28 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:05.063 14:53:28 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:05.063 14:53:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.063 14:53:28 -- common/autotest_common.sh@10 -- # set +x 00:07:05.063 ************************************ 00:07:05.063 START TEST accel_dif_generate_copy 00:07:05.063 ************************************ 00:07:05.063 14:53:28 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:05.063 14:53:28 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.063 14:53:28 -- accel/accel.sh@17 -- # local accel_module 00:07:05.063 14:53:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:05.063 14:53:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:05.063 14:53:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.063 14:53:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.063 14:53:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.063 14:53:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.063 14:53:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.063 14:53:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.063 14:53:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.063 14:53:28 -- accel/accel.sh@42 -- # jq -r . 00:07:05.063 [2024-11-18 14:53:28.427126] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:05.063 [2024-11-18 14:53:28.427221] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71158 ] 00:07:05.063 [2024-11-18 14:53:28.564456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.063 [2024-11-18 14:53:28.622950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.440 14:53:29 -- accel/accel.sh@18 -- # out=' 00:07:06.440 SPDK Configuration: 00:07:06.440 Core mask: 0x1 00:07:06.440 00:07:06.440 Accel Perf Configuration: 00:07:06.440 Workload Type: dif_generate_copy 00:07:06.441 Vector size: 4096 bytes 00:07:06.441 Transfer size: 4096 bytes 00:07:06.441 Vector count 1 00:07:06.441 Module: software 00:07:06.441 Queue depth: 32 00:07:06.441 Allocate depth: 32 00:07:06.441 # threads/core: 1 00:07:06.441 Run time: 1 seconds 00:07:06.441 Verify: No 00:07:06.441 00:07:06.441 Running for 1 seconds... 00:07:06.441 00:07:06.441 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.441 ------------------------------------------------------------------------------------ 00:07:06.441 0,0 116928/s 463 MiB/s 0 0 00:07:06.441 ==================================================================================== 00:07:06.441 Total 116928/s 456 MiB/s 0 0' 00:07:06.441 14:53:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:06.441 14:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:06.441 14:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:06.441 14:53:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:06.441 14:53:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.441 14:53:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.441 14:53:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.441 14:53:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.441 14:53:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.441 14:53:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.441 14:53:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.441 14:53:29 -- accel/accel.sh@42 -- # jq -r . 00:07:06.441 [2024-11-18 14:53:29.828752] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:06.441 [2024-11-18 14:53:29.828853] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71172 ] 00:07:06.441 [2024-11-18 14:53:29.966684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.700 [2024-11-18 14:53:30.038212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.700 14:53:30 -- accel/accel.sh@21 -- # val= 00:07:06.700 14:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:06.700 14:53:30 -- accel/accel.sh@21 -- # val= 00:07:06.700 14:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:06.700 14:53:30 -- accel/accel.sh@21 -- # val=0x1 00:07:06.700 14:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:06.700 14:53:30 -- accel/accel.sh@21 -- # val= 00:07:06.700 14:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:06.700 14:53:30 -- accel/accel.sh@21 -- # val= 00:07:06.700 14:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:06.700 14:53:30 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:06.700 14:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.700 14:53:30 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:06.700 14:53:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.700 14:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:06.700 14:53:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.700 14:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:06.700 14:53:30 -- accel/accel.sh@21 -- # val= 00:07:06.700 14:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:06.700 14:53:30 -- accel/accel.sh@21 -- # val=software 00:07:06.700 14:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.700 14:53:30 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:06.700 14:53:30 -- accel/accel.sh@21 -- # val=32 00:07:06.700 14:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:06.700 14:53:30 -- accel/accel.sh@21 -- # val=32 00:07:06.700 14:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:06.700 14:53:30 -- accel/accel.sh@21 
-- # val=1 00:07:06.700 14:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:06.700 14:53:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.700 14:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:06.700 14:53:30 -- accel/accel.sh@21 -- # val=No 00:07:06.700 14:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:06.700 14:53:30 -- accel/accel.sh@21 -- # val= 00:07:06.700 14:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:06.700 14:53:30 -- accel/accel.sh@21 -- # val= 00:07:06.700 14:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:06.700 14:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:08.078 14:53:31 -- accel/accel.sh@21 -- # val= 00:07:08.079 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.079 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:08.079 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:08.079 14:53:31 -- accel/accel.sh@21 -- # val= 00:07:08.079 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.079 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:08.079 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:08.079 14:53:31 -- accel/accel.sh@21 -- # val= 00:07:08.079 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.079 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:08.079 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:08.079 14:53:31 -- accel/accel.sh@21 -- # val= 00:07:08.079 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.079 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:08.079 ************************************ 00:07:08.079 END TEST accel_dif_generate_copy 00:07:08.079 ************************************ 00:07:08.079 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:08.079 14:53:31 -- accel/accel.sh@21 -- # val= 00:07:08.079 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.079 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:08.079 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:08.079 14:53:31 -- accel/accel.sh@21 -- # val= 00:07:08.079 14:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.079 14:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:08.079 14:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:08.079 14:53:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.079 14:53:31 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:08.079 14:53:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.079 00:07:08.079 real 0m2.829s 00:07:08.079 user 0m2.407s 00:07:08.079 sys 0m0.219s 00:07:08.079 14:53:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.079 14:53:31 -- common/autotest_common.sh@10 -- # set +x 00:07:08.079 14:53:31 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:08.079 14:53:31 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:08.079 14:53:31 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:08.079 14:53:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.079 14:53:31 -- 
common/autotest_common.sh@10 -- # set +x 00:07:08.079 ************************************ 00:07:08.079 START TEST accel_comp 00:07:08.079 ************************************ 00:07:08.079 14:53:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:08.079 14:53:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.079 14:53:31 -- accel/accel.sh@17 -- # local accel_module 00:07:08.079 14:53:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:08.079 14:53:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:08.079 14:53:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.079 14:53:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.079 14:53:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.079 14:53:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.079 14:53:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.079 14:53:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.079 14:53:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.079 14:53:31 -- accel/accel.sh@42 -- # jq -r . 00:07:08.079 [2024-11-18 14:53:31.303580] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:08.079 [2024-11-18 14:53:31.303675] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71212 ] 00:07:08.079 [2024-11-18 14:53:31.440975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.079 [2024-11-18 14:53:31.495398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.456 14:53:32 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:09.456 00:07:09.456 SPDK Configuration: 00:07:09.456 Core mask: 0x1 00:07:09.456 00:07:09.456 Accel Perf Configuration: 00:07:09.456 Workload Type: compress 00:07:09.456 Transfer size: 4096 bytes 00:07:09.456 Vector count 1 00:07:09.456 Module: software 00:07:09.456 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:09.456 Queue depth: 32 00:07:09.456 Allocate depth: 32 00:07:09.456 # threads/core: 1 00:07:09.456 Run time: 1 seconds 00:07:09.456 Verify: No 00:07:09.456 00:07:09.456 Running for 1 seconds... 
00:07:09.456 00:07:09.456 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:09.456 ------------------------------------------------------------------------------------ 00:07:09.456 0,0 60288/s 251 MiB/s 0 0 00:07:09.456 ==================================================================================== 00:07:09.456 Total 60288/s 235 MiB/s 0 0' 00:07:09.456 14:53:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:09.456 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.456 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.456 14:53:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:09.456 14:53:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.456 14:53:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.456 14:53:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.456 14:53:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.456 14:53:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.456 14:53:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.456 14:53:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.456 14:53:32 -- accel/accel.sh@42 -- # jq -r . 00:07:09.456 [2024-11-18 14:53:32.695474] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:09.456 [2024-11-18 14:53:32.695756] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71226 ] 00:07:09.456 [2024-11-18 14:53:32.824482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.456 [2024-11-18 14:53:32.876504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.456 14:53:32 -- accel/accel.sh@21 -- # val= 00:07:09.456 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.456 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.456 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.456 14:53:32 -- accel/accel.sh@21 -- # val= 00:07:09.456 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.456 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.456 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.456 14:53:32 -- accel/accel.sh@21 -- # val= 00:07:09.456 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.456 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.456 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.456 14:53:32 -- accel/accel.sh@21 -- # val=0x1 00:07:09.457 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.457 14:53:32 -- accel/accel.sh@21 -- # val= 00:07:09.457 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.457 14:53:32 -- accel/accel.sh@21 -- # val= 00:07:09.457 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.457 14:53:32 -- accel/accel.sh@21 -- # val=compress 00:07:09.457 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.457 14:53:32 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # IFS=: 
00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.457 14:53:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.457 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.457 14:53:32 -- accel/accel.sh@21 -- # val= 00:07:09.457 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.457 14:53:32 -- accel/accel.sh@21 -- # val=software 00:07:09.457 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.457 14:53:32 -- accel/accel.sh@23 -- # accel_module=software 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.457 14:53:32 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:09.457 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.457 14:53:32 -- accel/accel.sh@21 -- # val=32 00:07:09.457 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.457 14:53:32 -- accel/accel.sh@21 -- # val=32 00:07:09.457 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.457 14:53:32 -- accel/accel.sh@21 -- # val=1 00:07:09.457 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.457 14:53:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:09.457 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.457 14:53:32 -- accel/accel.sh@21 -- # val=No 00:07:09.457 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.457 14:53:32 -- accel/accel.sh@21 -- # val= 00:07:09.457 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:09.457 14:53:32 -- accel/accel.sh@21 -- # val= 00:07:09.457 14:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:09.457 14:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:10.834 14:53:34 -- accel/accel.sh@21 -- # val= 00:07:10.834 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.834 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.834 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.834 14:53:34 -- accel/accel.sh@21 -- # val= 00:07:10.834 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.834 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.834 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.834 14:53:34 -- accel/accel.sh@21 -- # val= 00:07:10.834 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.834 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.834 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.834 14:53:34 -- accel/accel.sh@21 -- # val= 
00:07:10.834 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.834 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.834 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.834 14:53:34 -- accel/accel.sh@21 -- # val= 00:07:10.834 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.834 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.834 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.834 14:53:34 -- accel/accel.sh@21 -- # val= 00:07:10.834 14:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.834 14:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:10.834 14:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:10.834 14:53:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:10.834 14:53:34 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:10.834 14:53:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.834 00:07:10.834 real 0m2.785s 00:07:10.834 user 0m2.379s 00:07:10.834 sys 0m0.206s 00:07:10.834 14:53:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.834 ************************************ 00:07:10.834 END TEST accel_comp 00:07:10.834 ************************************ 00:07:10.834 14:53:34 -- common/autotest_common.sh@10 -- # set +x 00:07:10.834 14:53:34 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:10.834 14:53:34 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:10.834 14:53:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.834 14:53:34 -- common/autotest_common.sh@10 -- # set +x 00:07:10.834 ************************************ 00:07:10.834 START TEST accel_decomp 00:07:10.834 ************************************ 00:07:10.834 14:53:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:10.834 14:53:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.834 14:53:34 -- accel/accel.sh@17 -- # local accel_module 00:07:10.834 14:53:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:10.834 14:53:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:10.834 14:53:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.834 14:53:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.834 14:53:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.834 14:53:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.834 14:53:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.834 14:53:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.834 14:53:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.834 14:53:34 -- accel/accel.sh@42 -- # jq -r . 00:07:10.834 [2024-11-18 14:53:34.139499] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:10.834 [2024-11-18 14:53:34.139603] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71261 ] 00:07:10.834 [2024-11-18 14:53:34.276858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.834 [2024-11-18 14:53:34.346490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.210 14:53:35 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:12.210 00:07:12.210 SPDK Configuration: 00:07:12.210 Core mask: 0x1 00:07:12.210 00:07:12.210 Accel Perf Configuration: 00:07:12.210 Workload Type: decompress 00:07:12.210 Transfer size: 4096 bytes 00:07:12.210 Vector count 1 00:07:12.210 Module: software 00:07:12.210 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:12.210 Queue depth: 32 00:07:12.210 Allocate depth: 32 00:07:12.210 # threads/core: 1 00:07:12.210 Run time: 1 seconds 00:07:12.210 Verify: Yes 00:07:12.210 00:07:12.210 Running for 1 seconds... 00:07:12.210 00:07:12.210 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.210 ------------------------------------------------------------------------------------ 00:07:12.210 0,0 85440/s 157 MiB/s 0 0 00:07:12.210 ==================================================================================== 00:07:12.210 Total 85440/s 333 MiB/s 0 0' 00:07:12.210 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.210 14:53:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:12.210 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.210 14:53:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:12.210 14:53:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.210 14:53:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.210 14:53:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.210 14:53:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.210 14:53:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.210 14:53:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.210 14:53:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.210 14:53:35 -- accel/accel.sh@42 -- # jq -r . 00:07:12.210 [2024-11-18 14:53:35.556439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:12.210 [2024-11-18 14:53:35.556536] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71280 ] 00:07:12.210 [2024-11-18 14:53:35.693943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.210 [2024-11-18 14:53:35.749891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.469 14:53:35 -- accel/accel.sh@21 -- # val= 00:07:12.469 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.469 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.469 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.469 14:53:35 -- accel/accel.sh@21 -- # val= 00:07:12.469 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.469 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.469 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.469 14:53:35 -- accel/accel.sh@21 -- # val= 00:07:12.470 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.470 14:53:35 -- accel/accel.sh@21 -- # val=0x1 00:07:12.470 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.470 14:53:35 -- accel/accel.sh@21 -- # val= 00:07:12.470 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.470 14:53:35 -- accel/accel.sh@21 -- # val= 00:07:12.470 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.470 14:53:35 -- accel/accel.sh@21 -- # val=decompress 00:07:12.470 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.470 14:53:35 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.470 14:53:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.470 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.470 14:53:35 -- accel/accel.sh@21 -- # val= 00:07:12.470 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.470 14:53:35 -- accel/accel.sh@21 -- # val=software 00:07:12.470 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.470 14:53:35 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.470 14:53:35 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:12.470 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.470 14:53:35 -- accel/accel.sh@21 -- # val=32 00:07:12.470 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.470 14:53:35 -- 
accel/accel.sh@21 -- # val=32 00:07:12.470 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.470 14:53:35 -- accel/accel.sh@21 -- # val=1 00:07:12.470 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.470 14:53:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:12.470 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.470 14:53:35 -- accel/accel.sh@21 -- # val=Yes 00:07:12.470 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.470 14:53:35 -- accel/accel.sh@21 -- # val= 00:07:12.470 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:12.470 14:53:35 -- accel/accel.sh@21 -- # val= 00:07:12.470 14:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:12.470 14:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:13.406 14:53:36 -- accel/accel.sh@21 -- # val= 00:07:13.406 14:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.406 14:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:13.406 14:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:13.406 14:53:36 -- accel/accel.sh@21 -- # val= 00:07:13.406 14:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.406 14:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:13.406 14:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:13.406 14:53:36 -- accel/accel.sh@21 -- # val= 00:07:13.406 14:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.406 14:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:13.406 14:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:13.406 14:53:36 -- accel/accel.sh@21 -- # val= 00:07:13.406 14:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.406 14:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:13.406 14:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:13.406 14:53:36 -- accel/accel.sh@21 -- # val= 00:07:13.406 14:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.406 14:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:13.406 ************************************ 00:07:13.406 END TEST accel_decomp 00:07:13.406 ************************************ 00:07:13.406 14:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:13.406 14:53:36 -- accel/accel.sh@21 -- # val= 00:07:13.406 14:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.406 14:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:13.406 14:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:13.406 14:53:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.406 14:53:36 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:13.406 14:53:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.406 00:07:13.406 real 0m2.824s 00:07:13.406 user 0m2.403s 00:07:13.406 sys 0m0.218s 00:07:13.406 14:53:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.406 14:53:36 -- common/autotest_common.sh@10 -- # set +x 00:07:13.406 14:53:36 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:07:13.406 14:53:36 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:13.406 14:53:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.407 14:53:36 -- common/autotest_common.sh@10 -- # set +x 00:07:13.407 ************************************ 00:07:13.407 START TEST accel_decmop_full 00:07:13.407 ************************************ 00:07:13.407 14:53:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:13.407 14:53:36 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.407 14:53:36 -- accel/accel.sh@17 -- # local accel_module 00:07:13.407 14:53:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:13.407 14:53:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:13.407 14:53:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.407 14:53:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.407 14:53:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.407 14:53:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.407 14:53:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.407 14:53:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.407 14:53:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.407 14:53:36 -- accel/accel.sh@42 -- # jq -r . 00:07:13.665 [2024-11-18 14:53:37.011997] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:13.665 [2024-11-18 14:53:37.012101] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71309 ] 00:07:13.665 [2024-11-18 14:53:37.150250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.665 [2024-11-18 14:53:37.210948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.042 14:53:38 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:15.042 00:07:15.042 SPDK Configuration: 00:07:15.042 Core mask: 0x1 00:07:15.042 00:07:15.042 Accel Perf Configuration: 00:07:15.042 Workload Type: decompress 00:07:15.042 Transfer size: 111250 bytes 00:07:15.042 Vector count 1 00:07:15.042 Module: software 00:07:15.042 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:15.042 Queue depth: 32 00:07:15.042 Allocate depth: 32 00:07:15.042 # threads/core: 1 00:07:15.042 Run time: 1 seconds 00:07:15.042 Verify: Yes 00:07:15.042 00:07:15.042 Running for 1 seconds... 
00:07:15.042 00:07:15.042 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.042 ------------------------------------------------------------------------------------ 00:07:15.042 0,0 5664/s 233 MiB/s 0 0 00:07:15.042 ==================================================================================== 00:07:15.042 Total 5664/s 600 MiB/s 0 0' 00:07:15.042 14:53:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:15.042 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:15.042 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:15.042 14:53:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:15.042 14:53:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.042 14:53:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.042 14:53:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.042 14:53:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.042 14:53:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.042 14:53:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.042 14:53:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.042 14:53:38 -- accel/accel.sh@42 -- # jq -r . 00:07:15.042 [2024-11-18 14:53:38.433912] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:15.042 [2024-11-18 14:53:38.434019] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71329 ] 00:07:15.042 [2024-11-18 14:53:38.571573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.042 [2024-11-18 14:53:38.627223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.302 14:53:38 -- accel/accel.sh@21 -- # val= 00:07:15.302 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:15.302 14:53:38 -- accel/accel.sh@21 -- # val= 00:07:15.302 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:15.302 14:53:38 -- accel/accel.sh@21 -- # val= 00:07:15.302 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:15.302 14:53:38 -- accel/accel.sh@21 -- # val=0x1 00:07:15.302 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:15.302 14:53:38 -- accel/accel.sh@21 -- # val= 00:07:15.302 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:15.302 14:53:38 -- accel/accel.sh@21 -- # val= 00:07:15.302 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:15.302 14:53:38 -- accel/accel.sh@21 -- # val=decompress 00:07:15.302 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.302 14:53:38 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:15.302 14:53:38 -- accel/accel.sh@20 
-- # IFS=: 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:15.302 14:53:38 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:15.302 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:15.302 14:53:38 -- accel/accel.sh@21 -- # val= 00:07:15.302 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:15.302 14:53:38 -- accel/accel.sh@21 -- # val=software 00:07:15.302 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.302 14:53:38 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:15.302 14:53:38 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:15.302 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:15.302 14:53:38 -- accel/accel.sh@21 -- # val=32 00:07:15.302 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:15.302 14:53:38 -- accel/accel.sh@21 -- # val=32 00:07:15.302 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:15.302 14:53:38 -- accel/accel.sh@21 -- # val=1 00:07:15.302 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:15.302 14:53:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.302 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:15.302 14:53:38 -- accel/accel.sh@21 -- # val=Yes 00:07:15.302 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:15.302 14:53:38 -- accel/accel.sh@21 -- # val= 00:07:15.302 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:15.302 14:53:38 -- accel/accel.sh@21 -- # val= 00:07:15.302 14:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:15.302 14:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.238 14:53:39 -- accel/accel.sh@21 -- # val= 00:07:16.238 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.238 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.238 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.238 14:53:39 -- accel/accel.sh@21 -- # val= 00:07:16.238 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.238 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.238 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.238 14:53:39 -- accel/accel.sh@21 -- # val= 00:07:16.238 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.238 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.238 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.238 14:53:39 -- accel/accel.sh@21 -- # 
val= 00:07:16.238 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.238 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.238 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.238 14:53:39 -- accel/accel.sh@21 -- # val= 00:07:16.238 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.238 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.238 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.238 14:53:39 -- accel/accel.sh@21 -- # val= 00:07:16.238 14:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.238 14:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:16.238 14:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:16.238 14:53:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.238 14:53:39 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:16.238 14:53:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.238 00:07:16.238 real 0m2.840s 00:07:16.238 user 0m2.404s 00:07:16.238 sys 0m0.231s 00:07:16.238 14:53:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.497 14:53:39 -- common/autotest_common.sh@10 -- # set +x 00:07:16.497 ************************************ 00:07:16.497 END TEST accel_decmop_full 00:07:16.497 ************************************ 00:07:16.497 14:53:39 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:16.497 14:53:39 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:16.497 14:53:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.497 14:53:39 -- common/autotest_common.sh@10 -- # set +x 00:07:16.497 ************************************ 00:07:16.497 START TEST accel_decomp_mcore 00:07:16.497 ************************************ 00:07:16.497 14:53:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:16.497 14:53:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.497 14:53:39 -- accel/accel.sh@17 -- # local accel_module 00:07:16.497 14:53:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:16.497 14:53:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:16.497 14:53:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.497 14:53:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.497 14:53:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.497 14:53:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.497 14:53:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.497 14:53:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.497 14:53:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.497 14:53:39 -- accel/accel.sh@42 -- # jq -r . 00:07:16.497 [2024-11-18 14:53:39.898263] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:16.497 [2024-11-18 14:53:39.898392] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71363 ] 00:07:16.497 [2024-11-18 14:53:40.037616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.755 [2024-11-18 14:53:40.109396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.755 [2024-11-18 14:53:40.109527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.755 [2024-11-18 14:53:40.109669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.755 [2024-11-18 14:53:40.109670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.131 14:53:41 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:18.131 00:07:18.131 SPDK Configuration: 00:07:18.131 Core mask: 0xf 00:07:18.131 00:07:18.131 Accel Perf Configuration: 00:07:18.131 Workload Type: decompress 00:07:18.131 Transfer size: 4096 bytes 00:07:18.131 Vector count 1 00:07:18.131 Module: software 00:07:18.131 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:18.131 Queue depth: 32 00:07:18.131 Allocate depth: 32 00:07:18.131 # threads/core: 1 00:07:18.131 Run time: 1 seconds 00:07:18.131 Verify: Yes 00:07:18.131 00:07:18.131 Running for 1 seconds... 00:07:18.131 00:07:18.131 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:18.131 ------------------------------------------------------------------------------------ 00:07:18.131 0,0 64320/s 118 MiB/s 0 0 00:07:18.131 3,0 60192/s 110 MiB/s 0 0 00:07:18.131 2,0 58240/s 107 MiB/s 0 0 00:07:18.131 1,0 62464/s 115 MiB/s 0 0 00:07:18.131 ==================================================================================== 00:07:18.131 Total 245216/s 957 MiB/s 0 0' 00:07:18.131 14:53:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:18.131 14:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.131 14:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.131 14:53:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:18.131 14:53:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.131 14:53:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.131 14:53:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.131 14:53:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.131 14:53:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.131 14:53:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.131 14:53:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.131 14:53:41 -- accel/accel.sh@42 -- # jq -r . 00:07:18.131 [2024-11-18 14:53:41.326804] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:18.131 [2024-11-18 14:53:41.327029] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71386 ] 00:07:18.131 [2024-11-18 14:53:41.457693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.131 [2024-11-18 14:53:41.509086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.131 [2024-11-18 14:53:41.509231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.131 [2024-11-18 14:53:41.509386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.131 [2024-11-18 14:53:41.509394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.131 14:53:41 -- accel/accel.sh@21 -- # val= 00:07:18.131 14:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.131 14:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.131 14:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.131 14:53:41 -- accel/accel.sh@21 -- # val= 00:07:18.132 14:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.132 14:53:41 -- accel/accel.sh@21 -- # val= 00:07:18.132 14:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.132 14:53:41 -- accel/accel.sh@21 -- # val=0xf 00:07:18.132 14:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.132 14:53:41 -- accel/accel.sh@21 -- # val= 00:07:18.132 14:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.132 14:53:41 -- accel/accel.sh@21 -- # val= 00:07:18.132 14:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.132 14:53:41 -- accel/accel.sh@21 -- # val=decompress 00:07:18.132 14:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.132 14:53:41 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.132 14:53:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:18.132 14:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.132 14:53:41 -- accel/accel.sh@21 -- # val= 00:07:18.132 14:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.132 14:53:41 -- accel/accel.sh@21 -- # val=software 00:07:18.132 14:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.132 14:53:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.132 14:53:41 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:18.132 14:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # IFS=: 
00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.132 14:53:41 -- accel/accel.sh@21 -- # val=32 00:07:18.132 14:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.132 14:53:41 -- accel/accel.sh@21 -- # val=32 00:07:18.132 14:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.132 14:53:41 -- accel/accel.sh@21 -- # val=1 00:07:18.132 14:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.132 14:53:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:18.132 14:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.132 14:53:41 -- accel/accel.sh@21 -- # val=Yes 00:07:18.132 14:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.132 14:53:41 -- accel/accel.sh@21 -- # val= 00:07:18.132 14:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:18.132 14:53:41 -- accel/accel.sh@21 -- # val= 00:07:18.132 14:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:18.132 14:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:19.508 14:53:42 -- accel/accel.sh@21 -- # val= 00:07:19.508 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.508 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.508 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.508 14:53:42 -- accel/accel.sh@21 -- # val= 00:07:19.508 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.508 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.508 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.508 14:53:42 -- accel/accel.sh@21 -- # val= 00:07:19.508 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.508 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.508 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.508 14:53:42 -- accel/accel.sh@21 -- # val= 00:07:19.508 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.508 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.508 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.508 14:53:42 -- accel/accel.sh@21 -- # val= 00:07:19.508 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.508 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.508 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.508 14:53:42 -- accel/accel.sh@21 -- # val= 00:07:19.508 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.508 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.508 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.508 14:53:42 -- accel/accel.sh@21 -- # val= 00:07:19.508 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.508 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.508 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.508 14:53:42 -- accel/accel.sh@21 -- # val= 00:07:19.508 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.508 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.508 14:53:42 -- 
accel/accel.sh@20 -- # read -r var val 00:07:19.508 14:53:42 -- accel/accel.sh@21 -- # val= 00:07:19.508 14:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.508 14:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:19.508 14:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:19.508 14:53:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.508 14:53:42 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:19.508 14:53:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.508 00:07:19.508 real 0m2.836s 00:07:19.508 user 0m9.165s 00:07:19.508 sys 0m0.252s 00:07:19.508 14:53:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.508 14:53:42 -- common/autotest_common.sh@10 -- # set +x 00:07:19.508 ************************************ 00:07:19.508 END TEST accel_decomp_mcore 00:07:19.508 ************************************ 00:07:19.508 14:53:42 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:19.508 14:53:42 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:19.508 14:53:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.508 14:53:42 -- common/autotest_common.sh@10 -- # set +x 00:07:19.508 ************************************ 00:07:19.508 START TEST accel_decomp_full_mcore 00:07:19.508 ************************************ 00:07:19.508 14:53:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:19.508 14:53:42 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.508 14:53:42 -- accel/accel.sh@17 -- # local accel_module 00:07:19.508 14:53:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:19.508 14:53:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.508 14:53:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:19.508 14:53:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.508 14:53:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.508 14:53:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.508 14:53:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.508 14:53:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.508 14:53:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.508 14:53:42 -- accel/accel.sh@42 -- # jq -r . 00:07:19.508 [2024-11-18 14:53:42.780058] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:19.508 [2024-11-18 14:53:42.780285] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71423 ] 00:07:19.508 [2024-11-18 14:53:42.910513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.508 [2024-11-18 14:53:42.964532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.508 [2024-11-18 14:53:42.964676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.508 [2024-11-18 14:53:42.964798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.508 [2024-11-18 14:53:42.965086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.883 14:53:44 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:20.883 00:07:20.883 SPDK Configuration: 00:07:20.883 Core mask: 0xf 00:07:20.883 00:07:20.883 Accel Perf Configuration: 00:07:20.883 Workload Type: decompress 00:07:20.883 Transfer size: 111250 bytes 00:07:20.883 Vector count 1 00:07:20.883 Module: software 00:07:20.883 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:20.883 Queue depth: 32 00:07:20.883 Allocate depth: 32 00:07:20.884 # threads/core: 1 00:07:20.884 Run time: 1 seconds 00:07:20.884 Verify: Yes 00:07:20.884 00:07:20.884 Running for 1 seconds... 00:07:20.884 00:07:20.884 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.884 ------------------------------------------------------------------------------------ 00:07:20.884 0,0 4832/s 199 MiB/s 0 0 00:07:20.884 3,0 5280/s 218 MiB/s 0 0 00:07:20.884 2,0 5600/s 231 MiB/s 0 0 00:07:20.884 1,0 5280/s 218 MiB/s 0 0 00:07:20.884 ==================================================================================== 00:07:20.884 Total 20992/s 2227 MiB/s 0 0' 00:07:20.884 14:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:20.884 14:53:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:20.884 14:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:20.884 14:53:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.884 14:53:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:20.884 14:53:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.884 14:53:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.884 14:53:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.884 14:53:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.884 14:53:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.884 14:53:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.884 14:53:44 -- accel/accel.sh@42 -- # jq -r . 00:07:20.884 [2024-11-18 14:53:44.249338] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:20.884 [2024-11-18 14:53:44.249425] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71440 ] 00:07:20.884 [2024-11-18 14:53:44.385341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:20.884 [2024-11-18 14:53:44.449786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.884 [2024-11-18 14:53:44.449950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.884 [2024-11-18 14:53:44.450087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.884 [2024-11-18 14:53:44.450373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.143 14:53:44 -- accel/accel.sh@21 -- # val= 00:07:21.143 14:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.143 14:53:44 -- accel/accel.sh@21 -- # val= 00:07:21.143 14:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.143 14:53:44 -- accel/accel.sh@21 -- # val= 00:07:21.143 14:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.143 14:53:44 -- accel/accel.sh@21 -- # val=0xf 00:07:21.143 14:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.143 14:53:44 -- accel/accel.sh@21 -- # val= 00:07:21.143 14:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.143 14:53:44 -- accel/accel.sh@21 -- # val= 00:07:21.143 14:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.143 14:53:44 -- accel/accel.sh@21 -- # val=decompress 00:07:21.143 14:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.143 14:53:44 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.143 14:53:44 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:21.143 14:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.143 14:53:44 -- accel/accel.sh@21 -- # val= 00:07:21.143 14:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.143 14:53:44 -- accel/accel.sh@21 -- # val=software 00:07:21.143 14:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.143 14:53:44 -- accel/accel.sh@23 -- # accel_module=software 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.143 14:53:44 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:21.143 14:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # IFS=: 
00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.143 14:53:44 -- accel/accel.sh@21 -- # val=32 00:07:21.143 14:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.143 14:53:44 -- accel/accel.sh@21 -- # val=32 00:07:21.143 14:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.143 14:53:44 -- accel/accel.sh@21 -- # val=1 00:07:21.143 14:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.143 14:53:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:21.143 14:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.143 14:53:44 -- accel/accel.sh@21 -- # val=Yes 00:07:21.143 14:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.143 14:53:44 -- accel/accel.sh@21 -- # val= 00:07:21.143 14:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.143 14:53:44 -- accel/accel.sh@21 -- # val= 00:07:21.143 14:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.143 14:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:22.520 14:53:45 -- accel/accel.sh@21 -- # val= 00:07:22.520 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.520 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.520 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.520 14:53:45 -- accel/accel.sh@21 -- # val= 00:07:22.520 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.520 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.520 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.520 14:53:45 -- accel/accel.sh@21 -- # val= 00:07:22.520 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.520 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.520 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.520 14:53:45 -- accel/accel.sh@21 -- # val= 00:07:22.520 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.520 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.520 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.520 14:53:45 -- accel/accel.sh@21 -- # val= 00:07:22.520 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.520 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.520 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.520 14:53:45 -- accel/accel.sh@21 -- # val= 00:07:22.520 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.520 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.520 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.520 14:53:45 -- accel/accel.sh@21 -- # val= 00:07:22.520 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.520 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.520 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.520 14:53:45 -- accel/accel.sh@21 -- # val= 00:07:22.520 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.520 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.520 14:53:45 -- 
accel/accel.sh@20 -- # read -r var val 00:07:22.520 14:53:45 -- accel/accel.sh@21 -- # val= 00:07:22.520 14:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.520 14:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:22.520 14:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:22.520 14:53:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:22.520 ************************************ 00:07:22.520 END TEST accel_decomp_full_mcore 00:07:22.520 ************************************ 00:07:22.520 14:53:45 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:22.520 14:53:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.520 00:07:22.520 real 0m3.007s 00:07:22.520 user 0m9.597s 00:07:22.520 sys 0m0.274s 00:07:22.520 14:53:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.520 14:53:45 -- common/autotest_common.sh@10 -- # set +x 00:07:22.520 14:53:45 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:22.520 14:53:45 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:22.520 14:53:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.520 14:53:45 -- common/autotest_common.sh@10 -- # set +x 00:07:22.520 ************************************ 00:07:22.520 START TEST accel_decomp_mthread 00:07:22.520 ************************************ 00:07:22.520 14:53:45 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:22.520 14:53:45 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.520 14:53:45 -- accel/accel.sh@17 -- # local accel_module 00:07:22.520 14:53:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:22.520 14:53:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:22.520 14:53:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.520 14:53:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.520 14:53:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.520 14:53:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.520 14:53:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.520 14:53:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.520 14:53:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.520 14:53:45 -- accel/accel.sh@42 -- # jq -r . 00:07:22.520 [2024-11-18 14:53:45.838001] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:22.520 [2024-11-18 14:53:45.838088] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71478 ] 00:07:22.521 [2024-11-18 14:53:45.973622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.521 [2024-11-18 14:53:46.037330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.897 14:53:47 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:23.897 00:07:23.897 SPDK Configuration: 00:07:23.897 Core mask: 0x1 00:07:23.897 00:07:23.897 Accel Perf Configuration: 00:07:23.897 Workload Type: decompress 00:07:23.897 Transfer size: 4096 bytes 00:07:23.897 Vector count 1 00:07:23.897 Module: software 00:07:23.897 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:23.897 Queue depth: 32 00:07:23.897 Allocate depth: 32 00:07:23.897 # threads/core: 2 00:07:23.897 Run time: 1 seconds 00:07:23.897 Verify: Yes 00:07:23.897 00:07:23.897 Running for 1 seconds... 00:07:23.897 00:07:23.897 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.897 ------------------------------------------------------------------------------------ 00:07:23.897 0,1 43264/s 79 MiB/s 0 0 00:07:23.897 0,0 43136/s 79 MiB/s 0 0 00:07:23.897 ==================================================================================== 00:07:23.897 Total 86400/s 337 MiB/s 0 0' 00:07:23.897 14:53:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:23.897 14:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:23.897 14:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:23.897 14:53:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:23.897 14:53:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.897 14:53:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.897 14:53:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.897 14:53:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.897 14:53:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.897 14:53:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.897 14:53:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.897 14:53:47 -- accel/accel.sh@42 -- # jq -r . 00:07:23.897 [2024-11-18 14:53:47.318197] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:23.897 [2024-11-18 14:53:47.318282] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71497 ] 00:07:23.897 [2024-11-18 14:53:47.453679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.156 [2024-11-18 14:53:47.517536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.156 14:53:47 -- accel/accel.sh@21 -- # val= 00:07:24.156 14:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.156 14:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:24.156 14:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:24.156 14:53:47 -- accel/accel.sh@21 -- # val= 00:07:24.156 14:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:24.157 14:53:47 -- accel/accel.sh@21 -- # val= 00:07:24.157 14:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:24.157 14:53:47 -- accel/accel.sh@21 -- # val=0x1 00:07:24.157 14:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:24.157 14:53:47 -- accel/accel.sh@21 -- # val= 00:07:24.157 14:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:24.157 14:53:47 -- accel/accel.sh@21 -- # val= 00:07:24.157 14:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:24.157 14:53:47 -- accel/accel.sh@21 -- # val=decompress 00:07:24.157 14:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.157 14:53:47 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:24.157 14:53:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:24.157 14:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:24.157 14:53:47 -- accel/accel.sh@21 -- # val= 00:07:24.157 14:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:24.157 14:53:47 -- accel/accel.sh@21 -- # val=software 00:07:24.157 14:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.157 14:53:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:24.157 14:53:47 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:24.157 14:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:24.157 14:53:47 -- accel/accel.sh@21 -- # val=32 00:07:24.157 14:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:24.157 14:53:47 -- 
accel/accel.sh@21 -- # val=32 00:07:24.157 14:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:24.157 14:53:47 -- accel/accel.sh@21 -- # val=2 00:07:24.157 14:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:24.157 14:53:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:24.157 14:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:24.157 14:53:47 -- accel/accel.sh@21 -- # val=Yes 00:07:24.157 14:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:24.157 14:53:47 -- accel/accel.sh@21 -- # val= 00:07:24.157 14:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:24.157 14:53:47 -- accel/accel.sh@21 -- # val= 00:07:24.157 14:53:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # IFS=: 00:07:24.157 14:53:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.534 14:53:48 -- accel/accel.sh@21 -- # val= 00:07:25.534 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.534 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.534 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.534 14:53:48 -- accel/accel.sh@21 -- # val= 00:07:25.534 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.534 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.534 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.534 14:53:48 -- accel/accel.sh@21 -- # val= 00:07:25.534 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.534 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.534 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.534 14:53:48 -- accel/accel.sh@21 -- # val= 00:07:25.534 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.534 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.534 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.534 14:53:48 -- accel/accel.sh@21 -- # val= 00:07:25.534 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.534 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.534 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.534 14:53:48 -- accel/accel.sh@21 -- # val= 00:07:25.534 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.534 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.534 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.534 14:53:48 -- accel/accel.sh@21 -- # val= 00:07:25.534 14:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.534 14:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:25.534 14:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:25.534 14:53:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:25.534 14:53:48 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:25.534 14:53:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.534 00:07:25.534 real 0m2.914s 00:07:25.534 user 0m2.449s 00:07:25.534 sys 0m0.259s 00:07:25.534 14:53:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.534 14:53:48 -- common/autotest_common.sh@10 -- # set +x 00:07:25.534 ************************************ 00:07:25.534 END 
TEST accel_decomp_mthread 00:07:25.534 ************************************ 00:07:25.534 14:53:48 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.534 14:53:48 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:25.534 14:53:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.534 14:53:48 -- common/autotest_common.sh@10 -- # set +x 00:07:25.534 ************************************ 00:07:25.534 START TEST accel_deomp_full_mthread 00:07:25.534 ************************************ 00:07:25.534 14:53:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.534 14:53:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.534 14:53:48 -- accel/accel.sh@17 -- # local accel_module 00:07:25.534 14:53:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.534 14:53:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.534 14:53:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.534 14:53:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.534 14:53:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.534 14:53:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.534 14:53:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.534 14:53:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.534 14:53:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.534 14:53:48 -- accel/accel.sh@42 -- # jq -r . 00:07:25.534 [2024-11-18 14:53:48.800212] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:25.534 [2024-11-18 14:53:48.800306] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71532 ] 00:07:25.534 [2024-11-18 14:53:48.940250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.534 [2024-11-18 14:53:49.009153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.910 14:53:50 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:26.910 00:07:26.910 SPDK Configuration: 00:07:26.910 Core mask: 0x1 00:07:26.910 00:07:26.910 Accel Perf Configuration: 00:07:26.910 Workload Type: decompress 00:07:26.910 Transfer size: 111250 bytes 00:07:26.910 Vector count 1 00:07:26.910 Module: software 00:07:26.910 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:26.910 Queue depth: 32 00:07:26.910 Allocate depth: 32 00:07:26.910 # threads/core: 2 00:07:26.910 Run time: 1 seconds 00:07:26.910 Verify: Yes 00:07:26.910 00:07:26.910 Running for 1 seconds... 
00:07:26.910 00:07:26.910 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.910 ------------------------------------------------------------------------------------ 00:07:26.910 0,1 2848/s 117 MiB/s 0 0 00:07:26.910 0,0 2816/s 116 MiB/s 0 0 00:07:26.910 ==================================================================================== 00:07:26.910 Total 5664/s 600 MiB/s 0 0' 00:07:26.910 14:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:26.910 14:53:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:26.910 14:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:26.910 14:53:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:26.910 14:53:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.910 14:53:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.910 14:53:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.910 14:53:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.910 14:53:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.910 14:53:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.910 14:53:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.910 14:53:50 -- accel/accel.sh@42 -- # jq -r . 00:07:26.910 [2024-11-18 14:53:50.263860] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:26.910 [2024-11-18 14:53:50.263975] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71551 ] 00:07:26.910 [2024-11-18 14:53:50.401793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.910 [2024-11-18 14:53:50.457725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.169 14:53:50 -- accel/accel.sh@21 -- # val= 00:07:27.169 14:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.169 14:53:50 -- accel/accel.sh@21 -- # val= 00:07:27.169 14:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.169 14:53:50 -- accel/accel.sh@21 -- # val= 00:07:27.169 14:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.169 14:53:50 -- accel/accel.sh@21 -- # val=0x1 00:07:27.169 14:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.169 14:53:50 -- accel/accel.sh@21 -- # val= 00:07:27.169 14:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.169 14:53:50 -- accel/accel.sh@21 -- # val= 00:07:27.169 14:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.169 14:53:50 -- accel/accel.sh@21 -- # val=decompress 00:07:27.169 14:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.169 14:53:50 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.169 14:53:50 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:27.169 14:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.169 14:53:50 -- accel/accel.sh@21 -- # val= 00:07:27.169 14:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.169 14:53:50 -- accel/accel.sh@21 -- # val=software 00:07:27.169 14:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.169 14:53:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.169 14:53:50 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:27.169 14:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.169 14:53:50 -- accel/accel.sh@21 -- # val=32 00:07:27.169 14:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.169 14:53:50 -- accel/accel.sh@21 -- # val=32 00:07:27.169 14:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.169 14:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.169 14:53:50 -- accel/accel.sh@21 -- # val=2 00:07:27.170 14:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.170 14:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.170 14:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.170 14:53:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:27.170 14:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.170 14:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.170 14:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.170 14:53:50 -- accel/accel.sh@21 -- # val=Yes 00:07:27.170 14:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.170 14:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.170 14:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.170 14:53:50 -- accel/accel.sh@21 -- # val= 00:07:27.170 14:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.170 14:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.170 14:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:27.170 14:53:50 -- accel/accel.sh@21 -- # val= 00:07:27.170 14:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.170 14:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:27.170 14:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.107 14:53:51 -- accel/accel.sh@21 -- # val= 00:07:28.107 14:53:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.107 14:53:51 -- accel/accel.sh@20 -- # IFS=: 00:07:28.107 14:53:51 -- accel/accel.sh@20 -- # read -r var val 00:07:28.107 14:53:51 -- accel/accel.sh@21 -- # val= 00:07:28.107 14:53:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.107 14:53:51 -- accel/accel.sh@20 -- # IFS=: 00:07:28.107 14:53:51 -- accel/accel.sh@20 -- # read -r var val 00:07:28.107 14:53:51 -- accel/accel.sh@21 -- # val= 00:07:28.107 14:53:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.107 14:53:51 -- accel/accel.sh@20 -- # IFS=: 00:07:28.107 14:53:51 -- accel/accel.sh@20 -- # 
read -r var val 00:07:28.107 14:53:51 -- accel/accel.sh@21 -- # val= 00:07:28.107 14:53:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.107 14:53:51 -- accel/accel.sh@20 -- # IFS=: 00:07:28.107 14:53:51 -- accel/accel.sh@20 -- # read -r var val 00:07:28.107 14:53:51 -- accel/accel.sh@21 -- # val= 00:07:28.107 14:53:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.107 14:53:51 -- accel/accel.sh@20 -- # IFS=: 00:07:28.107 14:53:51 -- accel/accel.sh@20 -- # read -r var val 00:07:28.107 14:53:51 -- accel/accel.sh@21 -- # val= 00:07:28.107 14:53:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.107 14:53:51 -- accel/accel.sh@20 -- # IFS=: 00:07:28.107 14:53:51 -- accel/accel.sh@20 -- # read -r var val 00:07:28.107 14:53:51 -- accel/accel.sh@21 -- # val= 00:07:28.107 14:53:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.107 14:53:51 -- accel/accel.sh@20 -- # IFS=: 00:07:28.107 14:53:51 -- accel/accel.sh@20 -- # read -r var val 00:07:28.107 14:53:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.107 14:53:51 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:28.107 14:53:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.107 00:07:28.107 real 0m2.890s 00:07:28.107 user 0m2.447s 00:07:28.107 sys 0m0.239s 00:07:28.107 14:53:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.107 14:53:51 -- common/autotest_common.sh@10 -- # set +x 00:07:28.107 ************************************ 00:07:28.107 END TEST accel_deomp_full_mthread 00:07:28.107 ************************************ 00:07:28.366 14:53:51 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:28.366 14:53:51 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:28.366 14:53:51 -- accel/accel.sh@129 -- # build_accel_config 00:07:28.366 14:53:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.366 14:53:51 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:28.366 14:53:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.366 14:53:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.366 14:53:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.366 14:53:51 -- common/autotest_common.sh@10 -- # set +x 00:07:28.366 14:53:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.366 14:53:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.366 14:53:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.366 14:53:51 -- accel/accel.sh@42 -- # jq -r . 00:07:28.366 ************************************ 00:07:28.366 START TEST accel_dif_functional_tests 00:07:28.366 ************************************ 00:07:28.366 14:53:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:28.366 [2024-11-18 14:53:51.772633] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:28.366 [2024-11-18 14:53:51.772731] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71587 ] 00:07:28.366 [2024-11-18 14:53:51.913527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.625 [2024-11-18 14:53:51.982880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.625 [2024-11-18 14:53:51.983023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.625 [2024-11-18 14:53:51.983031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.625 00:07:28.626 00:07:28.626 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.626 http://cunit.sourceforge.net/ 00:07:28.626 00:07:28.626 00:07:28.626 Suite: accel_dif 00:07:28.626 Test: verify: DIF generated, GUARD check ...passed 00:07:28.626 Test: verify: DIF generated, APPTAG check ...passed 00:07:28.626 Test: verify: DIF generated, REFTAG check ...passed 00:07:28.626 Test: verify: DIF not generated, GUARD check ...passed 00:07:28.626 Test: verify: DIF not generated, APPTAG check ...passed 00:07:28.626 Test: verify: DIF not generated, REFTAG check ...[2024-11-18 14:53:52.073713] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:28.626 [2024-11-18 14:53:52.073803] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:28.626 [2024-11-18 14:53:52.073840] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:28.626 [2024-11-18 14:53:52.073892] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:28.626 passed 00:07:28.626 Test: verify: APPTAG correct, APPTAG check ...passed[2024-11-18 14:53:52.073918] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:28.626 [2024-11-18 14:53:52.074004] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:28.626 00:07:28.626 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:28.626 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-11-18 14:53:52.074201] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:28.626 passed 00:07:28.626 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:28.626 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:28.626 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:28.626 Test: generate copy: DIF generated, GUARD check ...passed 00:07:28.626 Test: generate copy: DIF generated, APTTAG check ...[2024-11-18 14:53:52.074431] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:28.626 passed 00:07:28.626 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:28.626 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:28.626 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:28.626 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:28.626 Test: generate copy: iovecs-len validate ...[2024-11-18 14:53:52.074941] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:28.626 passed 00:07:28.626 Test: generate copy: buffer alignment validate ...passed 00:07:28.626 00:07:28.626 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.626 suites 1 1 n/a 0 0 00:07:28.626 tests 20 20 20 0 0 00:07:28.626 asserts 204 204 204 0 n/a 00:07:28.626 00:07:28.626 Elapsed time = 0.005 seconds 00:07:28.885 00:07:28.885 real 0m0.526s 00:07:28.885 user 0m0.702s 00:07:28.885 sys 0m0.162s 00:07:28.885 ************************************ 00:07:28.885 END TEST accel_dif_functional_tests 00:07:28.885 ************************************ 00:07:28.885 14:53:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.885 14:53:52 -- common/autotest_common.sh@10 -- # set +x 00:07:28.885 ************************************ 00:07:28.885 END TEST accel 00:07:28.885 ************************************ 00:07:28.885 00:07:28.885 real 1m1.088s 00:07:28.885 user 1m5.324s 00:07:28.885 sys 0m6.233s 00:07:28.885 14:53:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.885 14:53:52 -- common/autotest_common.sh@10 -- # set +x 00:07:28.885 14:53:52 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:28.885 14:53:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:28.885 14:53:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.885 14:53:52 -- common/autotest_common.sh@10 -- # set +x 00:07:28.885 ************************************ 00:07:28.885 START TEST accel_rpc 00:07:28.885 ************************************ 00:07:28.885 14:53:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:28.885 * Looking for test storage... 00:07:28.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:28.885 14:53:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:28.885 14:53:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:28.885 14:53:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:29.145 14:53:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:29.145 14:53:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:29.145 14:53:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:29.145 14:53:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:29.145 14:53:52 -- scripts/common.sh@335 -- # IFS=.-: 00:07:29.145 14:53:52 -- scripts/common.sh@335 -- # read -ra ver1 00:07:29.145 14:53:52 -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.145 14:53:52 -- scripts/common.sh@336 -- # read -ra ver2 00:07:29.145 14:53:52 -- scripts/common.sh@337 -- # local 'op=<' 00:07:29.145 14:53:52 -- scripts/common.sh@339 -- # ver1_l=2 00:07:29.145 14:53:52 -- scripts/common.sh@340 -- # ver2_l=1 00:07:29.145 14:53:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:29.145 14:53:52 -- scripts/common.sh@343 -- # case "$op" in 00:07:29.145 14:53:52 -- scripts/common.sh@344 -- # : 1 00:07:29.145 14:53:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:29.145 14:53:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.145 14:53:52 -- scripts/common.sh@364 -- # decimal 1 00:07:29.145 14:53:52 -- scripts/common.sh@352 -- # local d=1 00:07:29.145 14:53:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.145 14:53:52 -- scripts/common.sh@354 -- # echo 1 00:07:29.145 14:53:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:29.145 14:53:52 -- scripts/common.sh@365 -- # decimal 2 00:07:29.145 14:53:52 -- scripts/common.sh@352 -- # local d=2 00:07:29.145 14:53:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.145 14:53:52 -- scripts/common.sh@354 -- # echo 2 00:07:29.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.145 14:53:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:29.145 14:53:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:29.145 14:53:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:29.145 14:53:52 -- scripts/common.sh@367 -- # return 0 00:07:29.145 14:53:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.145 14:53:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:29.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.145 --rc genhtml_branch_coverage=1 00:07:29.145 --rc genhtml_function_coverage=1 00:07:29.145 --rc genhtml_legend=1 00:07:29.145 --rc geninfo_all_blocks=1 00:07:29.145 --rc geninfo_unexecuted_blocks=1 00:07:29.145 00:07:29.145 ' 00:07:29.145 14:53:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:29.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.145 --rc genhtml_branch_coverage=1 00:07:29.145 --rc genhtml_function_coverage=1 00:07:29.145 --rc genhtml_legend=1 00:07:29.145 --rc geninfo_all_blocks=1 00:07:29.145 --rc geninfo_unexecuted_blocks=1 00:07:29.145 00:07:29.145 ' 00:07:29.145 14:53:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:29.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.145 --rc genhtml_branch_coverage=1 00:07:29.145 --rc genhtml_function_coverage=1 00:07:29.145 --rc genhtml_legend=1 00:07:29.145 --rc geninfo_all_blocks=1 00:07:29.145 --rc geninfo_unexecuted_blocks=1 00:07:29.145 00:07:29.145 ' 00:07:29.145 14:53:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:29.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.145 --rc genhtml_branch_coverage=1 00:07:29.145 --rc genhtml_function_coverage=1 00:07:29.145 --rc genhtml_legend=1 00:07:29.145 --rc geninfo_all_blocks=1 00:07:29.145 --rc geninfo_unexecuted_blocks=1 00:07:29.145 00:07:29.145 ' 00:07:29.145 14:53:52 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:29.145 14:53:52 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71664 00:07:29.145 14:53:52 -- accel/accel_rpc.sh@15 -- # waitforlisten 71664 00:07:29.145 14:53:52 -- common/autotest_common.sh@829 -- # '[' -z 71664 ']' 00:07:29.145 14:53:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.145 14:53:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.145 14:53:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:29.145 14:53:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.145 14:53:52 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:29.145 14:53:52 -- common/autotest_common.sh@10 -- # set +x 00:07:29.145 [2024-11-18 14:53:52.548282] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:29.145 [2024-11-18 14:53:52.548418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71664 ] 00:07:29.145 [2024-11-18 14:53:52.688674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.404 [2024-11-18 14:53:52.744357] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:29.404 [2024-11-18 14:53:52.744580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.972 14:53:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.972 14:53:53 -- common/autotest_common.sh@862 -- # return 0 00:07:29.972 14:53:53 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:29.972 14:53:53 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:29.972 14:53:53 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:29.972 14:53:53 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:29.972 14:53:53 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:29.972 14:53:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:29.972 14:53:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.972 14:53:53 -- common/autotest_common.sh@10 -- # set +x 00:07:29.972 ************************************ 00:07:29.972 START TEST accel_assign_opcode 00:07:29.972 ************************************ 00:07:29.972 14:53:53 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:29.972 14:53:53 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:29.972 14:53:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.972 14:53:53 -- common/autotest_common.sh@10 -- # set +x 00:07:30.230 [2024-11-18 14:53:53.561206] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:30.230 14:53:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.230 14:53:53 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:30.230 14:53:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.230 14:53:53 -- common/autotest_common.sh@10 -- # set +x 00:07:30.230 [2024-11-18 14:53:53.569203] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:30.230 14:53:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.230 14:53:53 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:30.230 14:53:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.230 14:53:53 -- common/autotest_common.sh@10 -- # set +x 00:07:30.230 14:53:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.230 14:53:53 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:30.230 14:53:53 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:30.230 14:53:53 -- accel/accel_rpc.sh@42 -- # grep software 00:07:30.230 14:53:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.230 14:53:53 -- common/autotest_common.sh@10 -- # set +x 
00:07:30.230 14:53:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.490 software 00:07:30.490 ************************************ 00:07:30.490 END TEST accel_assign_opcode 00:07:30.490 ************************************ 00:07:30.490 00:07:30.490 real 0m0.273s 00:07:30.490 user 0m0.048s 00:07:30.490 sys 0m0.013s 00:07:30.490 14:53:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:30.490 14:53:53 -- common/autotest_common.sh@10 -- # set +x 00:07:30.490 14:53:53 -- accel/accel_rpc.sh@55 -- # killprocess 71664 00:07:30.490 14:53:53 -- common/autotest_common.sh@936 -- # '[' -z 71664 ']' 00:07:30.490 14:53:53 -- common/autotest_common.sh@940 -- # kill -0 71664 00:07:30.490 14:53:53 -- common/autotest_common.sh@941 -- # uname 00:07:30.490 14:53:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:30.490 14:53:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71664 00:07:30.490 killing process with pid 71664 00:07:30.490 14:53:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:30.490 14:53:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:30.490 14:53:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71664' 00:07:30.490 14:53:53 -- common/autotest_common.sh@955 -- # kill 71664 00:07:30.490 14:53:53 -- common/autotest_common.sh@960 -- # wait 71664 00:07:30.748 ************************************ 00:07:30.748 END TEST accel_rpc 00:07:30.748 ************************************ 00:07:30.748 00:07:30.748 real 0m1.899s 00:07:30.748 user 0m2.027s 00:07:30.748 sys 0m0.450s 00:07:30.748 14:53:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:30.748 14:53:54 -- common/autotest_common.sh@10 -- # set +x 00:07:30.748 14:53:54 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:30.749 14:53:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:30.749 14:53:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.749 14:53:54 -- common/autotest_common.sh@10 -- # set +x 00:07:30.749 ************************************ 00:07:30.749 START TEST app_cmdline 00:07:30.749 ************************************ 00:07:30.749 14:53:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:31.024 * Looking for test storage... 
00:07:31.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:31.024 14:53:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:31.024 14:53:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:31.024 14:53:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:31.024 14:53:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:31.024 14:53:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:31.024 14:53:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:31.024 14:53:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:31.024 14:53:54 -- scripts/common.sh@335 -- # IFS=.-: 00:07:31.024 14:53:54 -- scripts/common.sh@335 -- # read -ra ver1 00:07:31.024 14:53:54 -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.024 14:53:54 -- scripts/common.sh@336 -- # read -ra ver2 00:07:31.024 14:53:54 -- scripts/common.sh@337 -- # local 'op=<' 00:07:31.024 14:53:54 -- scripts/common.sh@339 -- # ver1_l=2 00:07:31.024 14:53:54 -- scripts/common.sh@340 -- # ver2_l=1 00:07:31.024 14:53:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:31.024 14:53:54 -- scripts/common.sh@343 -- # case "$op" in 00:07:31.024 14:53:54 -- scripts/common.sh@344 -- # : 1 00:07:31.025 14:53:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:31.025 14:53:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:31.025 14:53:54 -- scripts/common.sh@364 -- # decimal 1 00:07:31.025 14:53:54 -- scripts/common.sh@352 -- # local d=1 00:07:31.025 14:53:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.025 14:53:54 -- scripts/common.sh@354 -- # echo 1 00:07:31.025 14:53:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:31.025 14:53:54 -- scripts/common.sh@365 -- # decimal 2 00:07:31.025 14:53:54 -- scripts/common.sh@352 -- # local d=2 00:07:31.025 14:53:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.025 14:53:54 -- scripts/common.sh@354 -- # echo 2 00:07:31.025 14:53:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:31.025 14:53:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:31.025 14:53:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:31.025 14:53:54 -- scripts/common.sh@367 -- # return 0 00:07:31.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
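The LCOV_OPTS/LCOV export block keeps reappearing (it follows again just below) because every test script re-sources autotest_common.sh; the strings themselves only matter if a later coverage pass splices them into an lcov run, along the lines of this hypothetical invocation (not taken from this log):
  # hypothetical downstream use of the exported coverage knobs
  lcov $LCOV_OPTS --capture --directory /home/vagrant/spdk_repo/spdk --output-file coverage.info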
00:07:31.025 14:53:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.025 14:53:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:31.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.025 --rc genhtml_branch_coverage=1 00:07:31.025 --rc genhtml_function_coverage=1 00:07:31.025 --rc genhtml_legend=1 00:07:31.025 --rc geninfo_all_blocks=1 00:07:31.025 --rc geninfo_unexecuted_blocks=1 00:07:31.025 00:07:31.025 ' 00:07:31.025 14:53:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:31.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.025 --rc genhtml_branch_coverage=1 00:07:31.025 --rc genhtml_function_coverage=1 00:07:31.025 --rc genhtml_legend=1 00:07:31.025 --rc geninfo_all_blocks=1 00:07:31.025 --rc geninfo_unexecuted_blocks=1 00:07:31.025 00:07:31.025 ' 00:07:31.025 14:53:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:31.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.025 --rc genhtml_branch_coverage=1 00:07:31.025 --rc genhtml_function_coverage=1 00:07:31.025 --rc genhtml_legend=1 00:07:31.025 --rc geninfo_all_blocks=1 00:07:31.025 --rc geninfo_unexecuted_blocks=1 00:07:31.025 00:07:31.025 ' 00:07:31.025 14:53:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:31.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.025 --rc genhtml_branch_coverage=1 00:07:31.025 --rc genhtml_function_coverage=1 00:07:31.025 --rc genhtml_legend=1 00:07:31.025 --rc geninfo_all_blocks=1 00:07:31.025 --rc geninfo_unexecuted_blocks=1 00:07:31.025 00:07:31.025 ' 00:07:31.025 14:53:54 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:31.025 14:53:54 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71782 00:07:31.025 14:53:54 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:31.025 14:53:54 -- app/cmdline.sh@18 -- # waitforlisten 71782 00:07:31.025 14:53:54 -- common/autotest_common.sh@829 -- # '[' -z 71782 ']' 00:07:31.025 14:53:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.025 14:53:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.025 14:53:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.025 14:53:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.025 14:53:54 -- common/autotest_common.sh@10 -- # set +x 00:07:31.025 [2024-11-18 14:53:54.536301] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
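app_cmdline launches the target with --rpcs-allowed, which is the whole point of the test: only the two whitelisted methods may be called, and anything else must come back as JSON-RPC -32601. The checks that follow condense to this sketch (method names, socket and the expected error are the ones exercised in the trace below):
  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $tgt --rpcs-allowed spdk_get_version,rpc_get_methods &   # restrict the RPC surface
  # ...wait for /var/tmp/spdk.sock to come up, then:
  $rpc spdk_get_version                          # allowed: prints the version/fields JSON blob
  $rpc rpc_get_methods | jq -r '.[]' | sort      # allowed: exactly rpc_get_methods, spdk_get_version
  $rpc env_dpdk_get_mem_stats                    # blocked: Code=-32601 Msg=Method not found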
00:07:31.025 [2024-11-18 14:53:54.536564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71782 ] 00:07:31.304 [2024-11-18 14:53:54.674192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.304 [2024-11-18 14:53:54.728081] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:31.304 [2024-11-18 14:53:54.728562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.877 14:53:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.877 14:53:55 -- common/autotest_common.sh@862 -- # return 0 00:07:31.877 14:53:55 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:32.442 { 00:07:32.442 "fields": { 00:07:32.442 "commit": "c13c99a5e", 00:07:32.442 "major": 24, 00:07:32.442 "minor": 1, 00:07:32.442 "patch": 1, 00:07:32.442 "suffix": "-pre" 00:07:32.442 }, 00:07:32.442 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e" 00:07:32.442 } 00:07:32.443 14:53:55 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:32.443 14:53:55 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:32.443 14:53:55 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:32.443 14:53:55 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:32.443 14:53:55 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:32.443 14:53:55 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:32.443 14:53:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.443 14:53:55 -- common/autotest_common.sh@10 -- # set +x 00:07:32.443 14:53:55 -- app/cmdline.sh@26 -- # sort 00:07:32.443 14:53:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.443 14:53:55 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:32.443 14:53:55 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:32.443 14:53:55 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:32.443 14:53:55 -- common/autotest_common.sh@650 -- # local es=0 00:07:32.443 14:53:55 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:32.443 14:53:55 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:32.443 14:53:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.443 14:53:55 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:32.443 14:53:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.443 14:53:55 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:32.443 14:53:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.443 14:53:55 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:32.443 14:53:55 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:32.443 14:53:55 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:32.443 2024/11/18 14:53:56 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for 
env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:32.443 request: 00:07:32.443 { 00:07:32.443 "method": "env_dpdk_get_mem_stats", 00:07:32.443 "params": {} 00:07:32.443 } 00:07:32.443 Got JSON-RPC error response 00:07:32.443 GoRPCClient: error on JSON-RPC call 00:07:32.702 14:53:56 -- common/autotest_common.sh@653 -- # es=1 00:07:32.702 14:53:56 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:32.702 14:53:56 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:32.702 14:53:56 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:32.702 14:53:56 -- app/cmdline.sh@1 -- # killprocess 71782 00:07:32.702 14:53:56 -- common/autotest_common.sh@936 -- # '[' -z 71782 ']' 00:07:32.702 14:53:56 -- common/autotest_common.sh@940 -- # kill -0 71782 00:07:32.702 14:53:56 -- common/autotest_common.sh@941 -- # uname 00:07:32.702 14:53:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:32.702 14:53:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71782 00:07:32.702 14:53:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:32.702 killing process with pid 71782 00:07:32.702 14:53:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:32.702 14:53:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71782' 00:07:32.702 14:53:56 -- common/autotest_common.sh@955 -- # kill 71782 00:07:32.702 14:53:56 -- common/autotest_common.sh@960 -- # wait 71782 00:07:32.961 00:07:32.961 real 0m2.112s 00:07:32.961 user 0m2.568s 00:07:32.961 sys 0m0.510s 00:07:32.961 ************************************ 00:07:32.961 END TEST app_cmdline 00:07:32.961 ************************************ 00:07:32.961 14:53:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.961 14:53:56 -- common/autotest_common.sh@10 -- # set +x 00:07:32.961 14:53:56 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:32.961 14:53:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:32.961 14:53:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.961 14:53:56 -- common/autotest_common.sh@10 -- # set +x 00:07:32.961 ************************************ 00:07:32.961 START TEST version 00:07:32.961 ************************************ 00:07:32.961 14:53:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:32.961 * Looking for test storage... 
00:07:32.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:32.961 14:53:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:32.961 14:53:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:32.961 14:53:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:33.220 14:53:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:33.220 14:53:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:33.220 14:53:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:33.220 14:53:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:33.220 14:53:56 -- scripts/common.sh@335 -- # IFS=.-: 00:07:33.220 14:53:56 -- scripts/common.sh@335 -- # read -ra ver1 00:07:33.220 14:53:56 -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.220 14:53:56 -- scripts/common.sh@336 -- # read -ra ver2 00:07:33.220 14:53:56 -- scripts/common.sh@337 -- # local 'op=<' 00:07:33.220 14:53:56 -- scripts/common.sh@339 -- # ver1_l=2 00:07:33.220 14:53:56 -- scripts/common.sh@340 -- # ver2_l=1 00:07:33.220 14:53:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:33.220 14:53:56 -- scripts/common.sh@343 -- # case "$op" in 00:07:33.220 14:53:56 -- scripts/common.sh@344 -- # : 1 00:07:33.220 14:53:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:33.220 14:53:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:33.220 14:53:56 -- scripts/common.sh@364 -- # decimal 1 00:07:33.220 14:53:56 -- scripts/common.sh@352 -- # local d=1 00:07:33.220 14:53:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.220 14:53:56 -- scripts/common.sh@354 -- # echo 1 00:07:33.220 14:53:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:33.220 14:53:56 -- scripts/common.sh@365 -- # decimal 2 00:07:33.220 14:53:56 -- scripts/common.sh@352 -- # local d=2 00:07:33.220 14:53:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.220 14:53:56 -- scripts/common.sh@354 -- # echo 2 00:07:33.220 14:53:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:33.220 14:53:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:33.220 14:53:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:33.220 14:53:56 -- scripts/common.sh@367 -- # return 0 00:07:33.220 14:53:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.220 14:53:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:33.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.220 --rc genhtml_branch_coverage=1 00:07:33.221 --rc genhtml_function_coverage=1 00:07:33.221 --rc genhtml_legend=1 00:07:33.221 --rc geninfo_all_blocks=1 00:07:33.221 --rc geninfo_unexecuted_blocks=1 00:07:33.221 00:07:33.221 ' 00:07:33.221 14:53:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:33.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.221 --rc genhtml_branch_coverage=1 00:07:33.221 --rc genhtml_function_coverage=1 00:07:33.221 --rc genhtml_legend=1 00:07:33.221 --rc geninfo_all_blocks=1 00:07:33.221 --rc geninfo_unexecuted_blocks=1 00:07:33.221 00:07:33.221 ' 00:07:33.221 14:53:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:33.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.221 --rc genhtml_branch_coverage=1 00:07:33.221 --rc genhtml_function_coverage=1 00:07:33.221 --rc genhtml_legend=1 00:07:33.221 --rc geninfo_all_blocks=1 00:07:33.221 --rc geninfo_unexecuted_blocks=1 00:07:33.221 00:07:33.221 ' 00:07:33.221 14:53:56 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:33.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.221 --rc genhtml_branch_coverage=1 00:07:33.221 --rc genhtml_function_coverage=1 00:07:33.221 --rc genhtml_legend=1 00:07:33.221 --rc geninfo_all_blocks=1 00:07:33.221 --rc geninfo_unexecuted_blocks=1 00:07:33.221 00:07:33.221 ' 00:07:33.221 14:53:56 -- app/version.sh@17 -- # get_header_version major 00:07:33.221 14:53:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:33.221 14:53:56 -- app/version.sh@14 -- # cut -f2 00:07:33.221 14:53:56 -- app/version.sh@14 -- # tr -d '"' 00:07:33.221 14:53:56 -- app/version.sh@17 -- # major=24 00:07:33.221 14:53:56 -- app/version.sh@18 -- # get_header_version minor 00:07:33.221 14:53:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:33.221 14:53:56 -- app/version.sh@14 -- # cut -f2 00:07:33.221 14:53:56 -- app/version.sh@14 -- # tr -d '"' 00:07:33.221 14:53:56 -- app/version.sh@18 -- # minor=1 00:07:33.221 14:53:56 -- app/version.sh@19 -- # get_header_version patch 00:07:33.221 14:53:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:33.221 14:53:56 -- app/version.sh@14 -- # cut -f2 00:07:33.221 14:53:56 -- app/version.sh@14 -- # tr -d '"' 00:07:33.221 14:53:56 -- app/version.sh@19 -- # patch=1 00:07:33.221 14:53:56 -- app/version.sh@20 -- # get_header_version suffix 00:07:33.221 14:53:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:33.221 14:53:56 -- app/version.sh@14 -- # cut -f2 00:07:33.221 14:53:56 -- app/version.sh@14 -- # tr -d '"' 00:07:33.221 14:53:56 -- app/version.sh@20 -- # suffix=-pre 00:07:33.221 14:53:56 -- app/version.sh@22 -- # version=24.1 00:07:33.221 14:53:56 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:33.221 14:53:56 -- app/version.sh@25 -- # version=24.1.1 00:07:33.221 14:53:56 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:33.221 14:53:56 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:33.221 14:53:56 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:33.221 14:53:56 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:33.221 14:53:56 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:33.221 00:07:33.221 real 0m0.258s 00:07:33.221 user 0m0.174s 00:07:33.221 sys 0m0.119s 00:07:33.221 14:53:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:33.221 14:53:56 -- common/autotest_common.sh@10 -- # set +x 00:07:33.221 ************************************ 00:07:33.221 END TEST version 00:07:33.221 ************************************ 00:07:33.221 14:53:56 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:33.221 14:53:56 -- spdk/autotest.sh@191 -- # uname -s 00:07:33.221 14:53:56 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:33.221 14:53:56 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:33.221 14:53:56 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:33.221 14:53:56 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:33.221 14:53:56 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:33.221 14:53:56 
-- spdk/autotest.sh@255 -- # timing_exit lib 00:07:33.221 14:53:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:33.221 14:53:56 -- common/autotest_common.sh@10 -- # set +x 00:07:33.221 14:53:56 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:33.221 14:53:56 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:33.221 14:53:56 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:33.221 14:53:56 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:33.221 14:53:56 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:07:33.221 14:53:56 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:07:33.221 14:53:56 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:33.221 14:53:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:33.221 14:53:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.221 14:53:56 -- common/autotest_common.sh@10 -- # set +x 00:07:33.480 ************************************ 00:07:33.480 START TEST nvmf_tcp 00:07:33.480 ************************************ 00:07:33.480 14:53:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:33.480 * Looking for test storage... 00:07:33.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:33.480 14:53:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:33.480 14:53:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:33.480 14:53:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:33.480 14:53:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:33.480 14:53:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:33.480 14:53:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:33.480 14:53:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:33.480 14:53:56 -- scripts/common.sh@335 -- # IFS=.-: 00:07:33.480 14:53:56 -- scripts/common.sh@335 -- # read -ra ver1 00:07:33.480 14:53:56 -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.480 14:53:56 -- scripts/common.sh@336 -- # read -ra ver2 00:07:33.480 14:53:56 -- scripts/common.sh@337 -- # local 'op=<' 00:07:33.480 14:53:56 -- scripts/common.sh@339 -- # ver1_l=2 00:07:33.480 14:53:56 -- scripts/common.sh@340 -- # ver2_l=1 00:07:33.480 14:53:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:33.480 14:53:56 -- scripts/common.sh@343 -- # case "$op" in 00:07:33.480 14:53:56 -- scripts/common.sh@344 -- # : 1 00:07:33.480 14:53:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:33.480 14:53:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:33.480 14:53:56 -- scripts/common.sh@364 -- # decimal 1 00:07:33.480 14:53:56 -- scripts/common.sh@352 -- # local d=1 00:07:33.480 14:53:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.480 14:53:56 -- scripts/common.sh@354 -- # echo 1 00:07:33.480 14:53:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:33.480 14:53:56 -- scripts/common.sh@365 -- # decimal 2 00:07:33.480 14:53:56 -- scripts/common.sh@352 -- # local d=2 00:07:33.480 14:53:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.480 14:53:56 -- scripts/common.sh@354 -- # echo 2 00:07:33.480 14:53:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:33.480 14:53:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:33.480 14:53:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:33.480 14:53:56 -- scripts/common.sh@367 -- # return 0 00:07:33.480 14:53:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.480 14:53:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:33.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.480 --rc genhtml_branch_coverage=1 00:07:33.480 --rc genhtml_function_coverage=1 00:07:33.480 --rc genhtml_legend=1 00:07:33.480 --rc geninfo_all_blocks=1 00:07:33.480 --rc geninfo_unexecuted_blocks=1 00:07:33.480 00:07:33.480 ' 00:07:33.480 14:53:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:33.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.480 --rc genhtml_branch_coverage=1 00:07:33.480 --rc genhtml_function_coverage=1 00:07:33.480 --rc genhtml_legend=1 00:07:33.480 --rc geninfo_all_blocks=1 00:07:33.480 --rc geninfo_unexecuted_blocks=1 00:07:33.480 00:07:33.480 ' 00:07:33.480 14:53:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:33.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.480 --rc genhtml_branch_coverage=1 00:07:33.480 --rc genhtml_function_coverage=1 00:07:33.480 --rc genhtml_legend=1 00:07:33.480 --rc geninfo_all_blocks=1 00:07:33.480 --rc geninfo_unexecuted_blocks=1 00:07:33.480 00:07:33.480 ' 00:07:33.480 14:53:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:33.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.480 --rc genhtml_branch_coverage=1 00:07:33.480 --rc genhtml_function_coverage=1 00:07:33.480 --rc genhtml_legend=1 00:07:33.480 --rc geninfo_all_blocks=1 00:07:33.480 --rc geninfo_unexecuted_blocks=1 00:07:33.480 00:07:33.480 ' 00:07:33.480 14:53:56 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:33.480 14:53:56 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:33.480 14:53:56 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:33.480 14:53:56 -- nvmf/common.sh@7 -- # uname -s 00:07:33.480 14:53:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.480 14:53:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.480 14:53:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.480 14:53:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.480 14:53:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.480 14:53:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.480 14:53:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.480 14:53:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.480 14:53:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.480 14:53:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.480 14:53:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:07:33.480 14:53:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:07:33.480 14:53:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.480 14:53:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.480 14:53:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:33.481 14:53:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:33.481 14:53:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.481 14:53:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.481 14:53:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.481 14:53:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.481 14:53:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.481 14:53:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.481 14:53:56 -- paths/export.sh@5 -- # export PATH 00:07:33.481 14:53:56 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.481 14:53:56 -- nvmf/common.sh@46 -- # : 0 00:07:33.481 14:53:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:33.481 14:53:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:33.481 14:53:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:33.481 14:53:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.481 14:53:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.481 14:53:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:33.481 14:53:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:33.481 14:53:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:33.481 14:53:56 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:33.481 14:53:56 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:33.481 14:53:56 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:33.481 14:53:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:33.481 14:53:57 -- common/autotest_common.sh@10 -- # set +x 00:07:33.481 14:53:57 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:33.481 14:53:57 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:33.481 14:53:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:33.481 14:53:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.481 14:53:57 -- common/autotest_common.sh@10 -- # set +x 00:07:33.481 ************************************ 00:07:33.481 START TEST nvmf_example 00:07:33.481 ************************************ 00:07:33.481 14:53:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:33.740 * Looking for test storage... 00:07:33.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:33.740 14:53:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:33.740 14:53:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:33.740 14:53:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:33.740 14:53:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:33.740 14:53:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:33.740 14:53:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:33.740 14:53:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:33.740 14:53:57 -- scripts/common.sh@335 -- # IFS=.-: 00:07:33.740 14:53:57 -- scripts/common.sh@335 -- # read -ra ver1 00:07:33.740 14:53:57 -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.740 14:53:57 -- scripts/common.sh@336 -- # read -ra ver2 00:07:33.740 14:53:57 -- scripts/common.sh@337 -- # local 'op=<' 00:07:33.740 14:53:57 -- scripts/common.sh@339 -- # ver1_l=2 00:07:33.740 14:53:57 -- scripts/common.sh@340 -- # ver2_l=1 00:07:33.740 14:53:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:33.740 14:53:57 -- scripts/common.sh@343 -- # case "$op" in 00:07:33.740 14:53:57 -- scripts/common.sh@344 -- # : 1 00:07:33.740 14:53:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:33.740 14:53:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:33.740 14:53:57 -- scripts/common.sh@364 -- # decimal 1 00:07:33.740 14:53:57 -- scripts/common.sh@352 -- # local d=1 00:07:33.740 14:53:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.740 14:53:57 -- scripts/common.sh@354 -- # echo 1 00:07:33.740 14:53:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:33.740 14:53:57 -- scripts/common.sh@365 -- # decimal 2 00:07:33.740 14:53:57 -- scripts/common.sh@352 -- # local d=2 00:07:33.740 14:53:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.740 14:53:57 -- scripts/common.sh@354 -- # echo 2 00:07:33.740 14:53:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:33.740 14:53:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:33.740 14:53:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:33.740 14:53:57 -- scripts/common.sh@367 -- # return 0 00:07:33.740 14:53:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.740 14:53:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:33.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.740 --rc genhtml_branch_coverage=1 00:07:33.740 --rc genhtml_function_coverage=1 00:07:33.740 --rc genhtml_legend=1 00:07:33.740 --rc geninfo_all_blocks=1 00:07:33.740 --rc geninfo_unexecuted_blocks=1 00:07:33.740 00:07:33.740 ' 00:07:33.740 14:53:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:33.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.740 --rc genhtml_branch_coverage=1 00:07:33.740 --rc genhtml_function_coverage=1 00:07:33.740 --rc genhtml_legend=1 00:07:33.740 --rc geninfo_all_blocks=1 00:07:33.740 --rc geninfo_unexecuted_blocks=1 00:07:33.740 00:07:33.740 ' 00:07:33.740 14:53:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:33.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.740 --rc genhtml_branch_coverage=1 00:07:33.740 --rc genhtml_function_coverage=1 00:07:33.740 --rc genhtml_legend=1 00:07:33.740 --rc geninfo_all_blocks=1 00:07:33.740 --rc geninfo_unexecuted_blocks=1 00:07:33.741 00:07:33.741 ' 00:07:33.741 14:53:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:33.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.741 --rc genhtml_branch_coverage=1 00:07:33.741 --rc genhtml_function_coverage=1 00:07:33.741 --rc genhtml_legend=1 00:07:33.741 --rc geninfo_all_blocks=1 00:07:33.741 --rc geninfo_unexecuted_blocks=1 00:07:33.741 00:07:33.741 ' 00:07:33.741 14:53:57 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:33.741 14:53:57 -- nvmf/common.sh@7 -- # uname -s 00:07:33.741 14:53:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.741 14:53:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.741 14:53:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.741 14:53:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.741 14:53:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.741 14:53:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.741 14:53:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.741 14:53:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.741 14:53:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.741 14:53:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.741 14:53:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 
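The NVME_HOSTNQN/NVME_HOSTID pair captured above (via nvme gen-hostnqn) is the initiator identity that kernel-initiator tests hand to nvme connect; nvmf_example itself drives spdk_nvme_perf instead, so the connect line here is illustrative only, aimed at the listener this test creates further down:
  # illustrative: how the captured host identity is typically consumed
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
       --hostnqn "$NVME_HOSTNQN" --hostid "$NVME_HOSTID"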
00:07:33.741 14:53:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:07:33.741 14:53:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.741 14:53:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.741 14:53:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:33.741 14:53:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:33.741 14:53:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.741 14:53:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.741 14:53:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.741 14:53:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.741 14:53:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.741 14:53:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.741 14:53:57 -- paths/export.sh@5 -- # export PATH 00:07:33.741 14:53:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.741 14:53:57 -- nvmf/common.sh@46 -- # : 0 00:07:33.741 14:53:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:33.741 14:53:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:33.741 14:53:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:33.741 14:53:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.741 14:53:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.741 14:53:57 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:07:33.741 14:53:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:33.741 14:53:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:33.741 14:53:57 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:33.741 14:53:57 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:33.741 14:53:57 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:33.741 14:53:57 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:33.741 14:53:57 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:33.741 14:53:57 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:33.741 14:53:57 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:33.741 14:53:57 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:33.741 14:53:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:33.741 14:53:57 -- common/autotest_common.sh@10 -- # set +x 00:07:33.741 14:53:57 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:33.741 14:53:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:33.741 14:53:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.741 14:53:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:33.741 14:53:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:33.741 14:53:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:33.741 14:53:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.741 14:53:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.741 14:53:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.741 14:53:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:33.741 14:53:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:33.741 14:53:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:33.741 14:53:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:33.741 14:53:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:33.741 14:53:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:33.741 14:53:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:33.741 14:53:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:33.741 14:53:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:33.741 14:53:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:33.741 14:53:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:33.741 14:53:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:33.741 14:53:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:33.741 14:53:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.741 14:53:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:33.741 14:53:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:33.741 14:53:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:33.741 14:53:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:33.741 14:53:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:33.741 Cannot find device "nvmf_init_br" 00:07:33.741 14:53:57 -- nvmf/common.sh@153 -- # true 00:07:33.741 14:53:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:33.741 Cannot find device "nvmf_tgt_br" 00:07:33.741 14:53:57 -- nvmf/common.sh@154 -- # true 00:07:33.741 14:53:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:33.741 Cannot find device "nvmf_tgt_br2" 
00:07:33.741 14:53:57 -- nvmf/common.sh@155 -- # true 00:07:33.741 14:53:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:33.741 Cannot find device "nvmf_init_br" 00:07:33.741 14:53:57 -- nvmf/common.sh@156 -- # true 00:07:33.741 14:53:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:33.741 Cannot find device "nvmf_tgt_br" 00:07:33.741 14:53:57 -- nvmf/common.sh@157 -- # true 00:07:33.741 14:53:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:33.741 Cannot find device "nvmf_tgt_br2" 00:07:33.741 14:53:57 -- nvmf/common.sh@158 -- # true 00:07:33.741 14:53:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:33.741 Cannot find device "nvmf_br" 00:07:33.741 14:53:57 -- nvmf/common.sh@159 -- # true 00:07:33.741 14:53:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:34.000 Cannot find device "nvmf_init_if" 00:07:34.000 14:53:57 -- nvmf/common.sh@160 -- # true 00:07:34.000 14:53:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:34.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:34.000 14:53:57 -- nvmf/common.sh@161 -- # true 00:07:34.000 14:53:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:34.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:34.000 14:53:57 -- nvmf/common.sh@162 -- # true 00:07:34.000 14:53:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:34.000 14:53:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:34.000 14:53:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:34.000 14:53:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:34.000 14:53:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:34.000 14:53:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:34.000 14:53:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:34.000 14:53:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:34.000 14:53:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:34.000 14:53:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:34.000 14:53:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:34.000 14:53:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:34.000 14:53:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:34.000 14:53:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:34.000 14:53:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:34.000 14:53:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:34.000 14:53:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:34.000 14:53:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:34.000 14:53:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:34.000 14:53:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:34.000 14:53:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:34.000 14:53:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:34.259 14:53:57 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:34.259 14:53:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:34.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:07:34.259 00:07:34.259 --- 10.0.0.2 ping statistics --- 00:07:34.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.259 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:07:34.259 14:53:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:34.259 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:34.259 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:07:34.259 00:07:34.259 --- 10.0.0.3 ping statistics --- 00:07:34.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.259 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:07:34.259 14:53:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:34.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:34.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:07:34.259 00:07:34.259 --- 10.0.0.1 ping statistics --- 00:07:34.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.259 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:07:34.259 14:53:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.259 14:53:57 -- nvmf/common.sh@421 -- # return 0 00:07:34.259 14:53:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:34.259 14:53:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.259 14:53:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:34.259 14:53:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:34.259 14:53:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.259 14:53:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:34.259 14:53:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:34.259 14:53:57 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:34.259 14:53:57 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:34.259 14:53:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:34.259 14:53:57 -- common/autotest_common.sh@10 -- # set +x 00:07:34.259 14:53:57 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:34.259 14:53:57 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:34.259 14:53:57 -- target/nvmf_example.sh@34 -- # nvmfpid=72161 00:07:34.259 14:53:57 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:34.259 14:53:57 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:34.259 14:53:57 -- target/nvmf_example.sh@36 -- # waitforlisten 72161 00:07:34.259 14:53:57 -- common/autotest_common.sh@829 -- # '[' -z 72161 ']' 00:07:34.259 14:53:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.259 14:53:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.259 14:53:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
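Everything from the "Cannot find device" warnings up to the three successful pings is nvmf_veth_init building a private test network: stale interfaces are torn down first (those warnings are expected), then one veth pair per endpoint is created, the target-side ends are moved into the nvmf_tgt_ns_spdk namespace, and the host-side ends are bridged. Condensed from the commands in the trace (the second target pair for 10.0.0.3, the link-up steps and the FORWARD rule follow the same pattern):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, gets 10.0.0.1
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, gets 10.0.0.2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                      # bridge joins the host-side ends
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT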
00:07:34.259 14:53:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.259 14:53:57 -- common/autotest_common.sh@10 -- # set +x 00:07:35.194 14:53:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.194 14:53:58 -- common/autotest_common.sh@862 -- # return 0 00:07:35.194 14:53:58 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:35.194 14:53:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:35.194 14:53:58 -- common/autotest_common.sh@10 -- # set +x 00:07:35.194 14:53:58 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:35.194 14:53:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.194 14:53:58 -- common/autotest_common.sh@10 -- # set +x 00:07:35.194 14:53:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.194 14:53:58 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:35.194 14:53:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.194 14:53:58 -- common/autotest_common.sh@10 -- # set +x 00:07:35.194 14:53:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.194 14:53:58 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:35.194 14:53:58 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:35.194 14:53:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.194 14:53:58 -- common/autotest_common.sh@10 -- # set +x 00:07:35.194 14:53:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.194 14:53:58 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:35.194 14:53:58 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:35.194 14:53:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.194 14:53:58 -- common/autotest_common.sh@10 -- # set +x 00:07:35.194 14:53:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.194 14:53:58 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.194 14:53:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.194 14:53:58 -- common/autotest_common.sh@10 -- # set +x 00:07:35.194 14:53:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.194 14:53:58 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:35.194 14:53:58 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:47.398 Initializing NVMe Controllers 00:07:47.398 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:47.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:47.398 Initialization complete. Launching workers. 
00:07:47.398 ======================================================== 00:07:47.398 Latency(us) 00:07:47.398 Device Information : IOPS MiB/s Average min max 00:07:47.398 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17227.64 67.30 3715.20 621.71 20595.76 00:07:47.398 ======================================================== 00:07:47.398 Total : 17227.64 67.30 3715.20 621.71 20595.76 00:07:47.398 00:07:47.398 14:54:09 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:47.398 14:54:09 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:47.398 14:54:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:47.398 14:54:09 -- nvmf/common.sh@116 -- # sync 00:07:47.398 14:54:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:47.398 14:54:09 -- nvmf/common.sh@119 -- # set +e 00:07:47.398 14:54:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:47.398 14:54:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:47.398 rmmod nvme_tcp 00:07:47.398 rmmod nvme_fabrics 00:07:47.398 rmmod nvme_keyring 00:07:47.398 14:54:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:47.398 14:54:09 -- nvmf/common.sh@123 -- # set -e 00:07:47.398 14:54:09 -- nvmf/common.sh@124 -- # return 0 00:07:47.398 14:54:09 -- nvmf/common.sh@477 -- # '[' -n 72161 ']' 00:07:47.398 14:54:09 -- nvmf/common.sh@478 -- # killprocess 72161 00:07:47.398 14:54:09 -- common/autotest_common.sh@936 -- # '[' -z 72161 ']' 00:07:47.398 14:54:09 -- common/autotest_common.sh@940 -- # kill -0 72161 00:07:47.399 14:54:09 -- common/autotest_common.sh@941 -- # uname 00:07:47.399 14:54:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:47.399 14:54:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72161 00:07:47.399 killing process with pid 72161 00:07:47.399 14:54:09 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:07:47.399 14:54:09 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:07:47.399 14:54:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72161' 00:07:47.399 14:54:09 -- common/autotest_common.sh@955 -- # kill 72161 00:07:47.399 14:54:09 -- common/autotest_common.sh@960 -- # wait 72161 00:07:47.399 nvmf threads initialize successfully 00:07:47.399 bdev subsystem init successfully 00:07:47.399 created a nvmf target service 00:07:47.399 create targets's poll groups done 00:07:47.399 all subsystems of target started 00:07:47.399 nvmf target is running 00:07:47.399 all subsystems of target stopped 00:07:47.399 destroy targets's poll groups done 00:07:47.399 destroyed the nvmf target service 00:07:47.399 bdev subsystem finish successfully 00:07:47.399 nvmf threads destroy successfully 00:07:47.399 14:54:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:47.399 14:54:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:47.399 14:54:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:47.399 14:54:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:47.399 14:54:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:47.399 14:54:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.399 14:54:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.399 14:54:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.399 14:54:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:47.399 14:54:09 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:47.399 14:54:09 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:07:47.399 14:54:09 -- common/autotest_common.sh@10 -- # set +x 00:07:47.399 00:07:47.399 real 0m12.377s 00:07:47.399 user 0m44.188s 00:07:47.399 sys 0m2.003s 00:07:47.399 14:54:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.399 14:54:09 -- common/autotest_common.sh@10 -- # set +x 00:07:47.399 ************************************ 00:07:47.399 END TEST nvmf_example 00:07:47.399 ************************************ 00:07:47.399 14:54:09 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:47.399 14:54:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:47.399 14:54:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.399 14:54:09 -- common/autotest_common.sh@10 -- # set +x 00:07:47.399 ************************************ 00:07:47.399 START TEST nvmf_filesystem 00:07:47.399 ************************************ 00:07:47.399 14:54:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:47.399 * Looking for test storage... 00:07:47.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:47.399 14:54:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:47.399 14:54:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:47.399 14:54:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:47.399 14:54:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:47.399 14:54:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:47.399 14:54:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:47.399 14:54:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:47.399 14:54:09 -- scripts/common.sh@335 -- # IFS=.-: 00:07:47.399 14:54:09 -- scripts/common.sh@335 -- # read -ra ver1 00:07:47.399 14:54:09 -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.399 14:54:09 -- scripts/common.sh@336 -- # read -ra ver2 00:07:47.399 14:54:09 -- scripts/common.sh@337 -- # local 'op=<' 00:07:47.399 14:54:09 -- scripts/common.sh@339 -- # ver1_l=2 00:07:47.399 14:54:09 -- scripts/common.sh@340 -- # ver2_l=1 00:07:47.399 14:54:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:47.399 14:54:09 -- scripts/common.sh@343 -- # case "$op" in 00:07:47.399 14:54:09 -- scripts/common.sh@344 -- # : 1 00:07:47.399 14:54:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:47.399 14:54:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:47.399 14:54:09 -- scripts/common.sh@364 -- # decimal 1 00:07:47.399 14:54:09 -- scripts/common.sh@352 -- # local d=1 00:07:47.399 14:54:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.399 14:54:09 -- scripts/common.sh@354 -- # echo 1 00:07:47.399 14:54:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:47.399 14:54:09 -- scripts/common.sh@365 -- # decimal 2 00:07:47.399 14:54:09 -- scripts/common.sh@352 -- # local d=2 00:07:47.399 14:54:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.399 14:54:09 -- scripts/common.sh@354 -- # echo 2 00:07:47.399 14:54:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:47.399 14:54:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:47.399 14:54:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:47.399 14:54:09 -- scripts/common.sh@367 -- # return 0 00:07:47.399 14:54:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.399 14:54:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:47.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.399 --rc genhtml_branch_coverage=1 00:07:47.399 --rc genhtml_function_coverage=1 00:07:47.399 --rc genhtml_legend=1 00:07:47.399 --rc geninfo_all_blocks=1 00:07:47.399 --rc geninfo_unexecuted_blocks=1 00:07:47.399 00:07:47.399 ' 00:07:47.399 14:54:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:47.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.399 --rc genhtml_branch_coverage=1 00:07:47.399 --rc genhtml_function_coverage=1 00:07:47.399 --rc genhtml_legend=1 00:07:47.399 --rc geninfo_all_blocks=1 00:07:47.399 --rc geninfo_unexecuted_blocks=1 00:07:47.399 00:07:47.399 ' 00:07:47.399 14:54:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:47.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.399 --rc genhtml_branch_coverage=1 00:07:47.399 --rc genhtml_function_coverage=1 00:07:47.399 --rc genhtml_legend=1 00:07:47.399 --rc geninfo_all_blocks=1 00:07:47.399 --rc geninfo_unexecuted_blocks=1 00:07:47.399 00:07:47.399 ' 00:07:47.399 14:54:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:47.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.399 --rc genhtml_branch_coverage=1 00:07:47.399 --rc genhtml_function_coverage=1 00:07:47.399 --rc genhtml_legend=1 00:07:47.399 --rc geninfo_all_blocks=1 00:07:47.399 --rc geninfo_unexecuted_blocks=1 00:07:47.399 00:07:47.399 ' 00:07:47.399 14:54:09 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:47.399 14:54:09 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:47.399 14:54:09 -- common/autotest_common.sh@34 -- # set -e 00:07:47.399 14:54:09 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:47.399 14:54:09 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:47.399 14:54:09 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:47.399 14:54:09 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:47.399 14:54:09 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:47.399 14:54:09 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:47.399 14:54:09 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:47.399 14:54:09 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:47.399 14:54:09 -- common/build_config.sh@5 -- # 
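The lcov check above goes through cmp_versions from scripts/common.sh, which splits version strings on '.', '-' and ':' and compares the fields numerically, padding missing fields with zero. A stand-alone sketch of the same idea (version_lt is a hypothetical helper, not the script's exact code):

    #!/usr/bin/env bash
    # Hypothetical version comparator in the spirit of cmp_versions.
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"      # e.g. "1.15" -> (1 15)
        IFS='.-:' read -ra ver2 <<< "$2"
        local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < len; i++ )); do
            local a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing fields compare as 0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1                                    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov < 2: enable the --rc lcov_*_coverage=1 options"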
CONFIG_USDT=y 00:07:47.399 14:54:09 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:47.399 14:54:09 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:47.399 14:54:09 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:47.399 14:54:09 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:47.399 14:54:09 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:47.399 14:54:09 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:47.399 14:54:09 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:47.399 14:54:09 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:47.399 14:54:09 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:47.399 14:54:09 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:47.399 14:54:09 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:47.399 14:54:09 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:47.399 14:54:09 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:47.399 14:54:09 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:47.399 14:54:09 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:47.399 14:54:09 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:47.399 14:54:09 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:47.399 14:54:09 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:47.399 14:54:09 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:47.399 14:54:09 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:47.399 14:54:09 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:47.399 14:54:09 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:47.399 14:54:09 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:47.399 14:54:09 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:47.399 14:54:09 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:47.399 14:54:09 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:47.399 14:54:09 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:47.399 14:54:09 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:47.399 14:54:09 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:47.399 14:54:09 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:47.399 14:54:09 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:47.399 14:54:09 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:47.399 14:54:09 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:47.399 14:54:09 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:47.399 14:54:09 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:47.399 14:54:09 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:47.399 14:54:09 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:47.399 14:54:09 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:47.400 14:54:09 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:47.400 14:54:09 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:47.400 14:54:09 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:47.400 14:54:09 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:47.400 14:54:09 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:47.400 14:54:09 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:47.400 14:54:09 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:47.400 14:54:09 -- common/build_config.sh@51 
-- # CONFIG_VFIO_USER=n 00:07:47.400 14:54:09 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:47.400 14:54:09 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:47.400 14:54:09 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:47.400 14:54:09 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:47.400 14:54:09 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:47.400 14:54:09 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:47.400 14:54:09 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:07:47.400 14:54:09 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:47.400 14:54:09 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:47.400 14:54:09 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:47.400 14:54:09 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:47.400 14:54:09 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:47.400 14:54:09 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:47.400 14:54:09 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:47.400 14:54:09 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:47.400 14:54:09 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:47.400 14:54:09 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:07:47.400 14:54:09 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:47.400 14:54:09 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:47.400 14:54:09 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:47.400 14:54:09 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:47.400 14:54:09 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:47.400 14:54:09 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:47.400 14:54:09 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:47.400 14:54:09 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:47.400 14:54:09 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:47.400 14:54:09 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:47.400 14:54:09 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:47.400 14:54:09 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:47.400 14:54:09 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:47.400 14:54:09 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:07:47.400 14:54:09 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:47.400 14:54:09 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:07:47.400 14:54:09 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:47.400 14:54:09 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:47.400 14:54:09 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:47.400 14:54:09 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:47.400 14:54:09 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:47.400 14:54:09 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:47.400 14:54:09 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:47.400 14:54:09 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:47.400 14:54:09 -- common/applications.sh@19 -- # 
SPDK_APP=("$_app_dir/spdk_tgt") 00:07:47.400 14:54:09 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:47.400 14:54:09 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:47.400 #define SPDK_CONFIG_H 00:07:47.400 #define SPDK_CONFIG_APPS 1 00:07:47.400 #define SPDK_CONFIG_ARCH native 00:07:47.400 #undef SPDK_CONFIG_ASAN 00:07:47.400 #define SPDK_CONFIG_AVAHI 1 00:07:47.400 #undef SPDK_CONFIG_CET 00:07:47.400 #define SPDK_CONFIG_COVERAGE 1 00:07:47.400 #define SPDK_CONFIG_CROSS_PREFIX 00:07:47.400 #undef SPDK_CONFIG_CRYPTO 00:07:47.400 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:47.400 #undef SPDK_CONFIG_CUSTOMOCF 00:07:47.400 #undef SPDK_CONFIG_DAOS 00:07:47.400 #define SPDK_CONFIG_DAOS_DIR 00:07:47.400 #define SPDK_CONFIG_DEBUG 1 00:07:47.400 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:47.400 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:07:47.400 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:07:47.400 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:07:47.400 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:47.400 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:47.400 #define SPDK_CONFIG_EXAMPLES 1 00:07:47.400 #undef SPDK_CONFIG_FC 00:07:47.400 #define SPDK_CONFIG_FC_PATH 00:07:47.400 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:47.400 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:47.400 #undef SPDK_CONFIG_FUSE 00:07:47.400 #undef SPDK_CONFIG_FUZZER 00:07:47.400 #define SPDK_CONFIG_FUZZER_LIB 00:07:47.400 #define SPDK_CONFIG_GOLANG 1 00:07:47.400 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:47.400 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:47.400 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:47.400 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:47.400 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:47.400 #define SPDK_CONFIG_IDXD 1 00:07:47.400 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:47.400 #undef SPDK_CONFIG_IPSEC_MB 00:07:47.400 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:47.400 #define SPDK_CONFIG_ISAL 1 00:07:47.400 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:47.400 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:47.400 #define SPDK_CONFIG_LIBDIR 00:07:47.400 #undef SPDK_CONFIG_LTO 00:07:47.400 #define SPDK_CONFIG_MAX_LCORES 00:07:47.400 #define SPDK_CONFIG_NVME_CUSE 1 00:07:47.400 #undef SPDK_CONFIG_OCF 00:07:47.400 #define SPDK_CONFIG_OCF_PATH 00:07:47.400 #define SPDK_CONFIG_OPENSSL_PATH 00:07:47.400 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:47.400 #undef SPDK_CONFIG_PGO_USE 00:07:47.400 #define SPDK_CONFIG_PREFIX /usr/local 00:07:47.400 #undef SPDK_CONFIG_RAID5F 00:07:47.400 #undef SPDK_CONFIG_RBD 00:07:47.400 #define SPDK_CONFIG_RDMA 1 00:07:47.400 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:47.400 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:47.400 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:47.400 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:47.400 #define SPDK_CONFIG_SHARED 1 00:07:47.400 #undef SPDK_CONFIG_SMA 00:07:47.400 #define SPDK_CONFIG_TESTS 1 00:07:47.400 #undef SPDK_CONFIG_TSAN 00:07:47.400 #define SPDK_CONFIG_UBLK 1 00:07:47.400 #define SPDK_CONFIG_UBSAN 1 00:07:47.400 #undef SPDK_CONFIG_UNIT_TESTS 00:07:47.400 #undef SPDK_CONFIG_URING 00:07:47.400 #define SPDK_CONFIG_URING_PATH 00:07:47.400 #undef SPDK_CONFIG_URING_ZNS 00:07:47.400 #define SPDK_CONFIG_USDT 1 00:07:47.400 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:47.400 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:47.400 #undef SPDK_CONFIG_VFIO_USER 00:07:47.400 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:07:47.400 #define SPDK_CONFIG_VHOST 1 00:07:47.400 #define SPDK_CONFIG_VIRTIO 1 00:07:47.400 #undef SPDK_CONFIG_VTUNE 00:07:47.400 #define SPDK_CONFIG_VTUNE_DIR 00:07:47.400 #define SPDK_CONFIG_WERROR 1 00:07:47.400 #define SPDK_CONFIG_WPDK_DIR 00:07:47.400 #undef SPDK_CONFIG_XNVME 00:07:47.400 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:47.400 14:54:09 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:47.400 14:54:09 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:47.400 14:54:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.400 14:54:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.400 14:54:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.400 14:54:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.400 14:54:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.400 14:54:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.400 14:54:09 -- paths/export.sh@5 -- # export PATH 00:07:47.400 14:54:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.400 14:54:09 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:47.400 14:54:09 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:47.400 14:54:09 -- pm/common@6 -- # readlink -f 
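applications.sh, traced above, decides whether debug-only apps are available by pattern-matching the generated config.h for "#define SPDK_CONFIG_DEBUG". A hedged, stand-alone version of that check (the path reflects this run's layout and would need adjusting elsewhere):

    #!/usr/bin/env bash
    # Hypothetical sketch of the config.h flag probe seen in applications.sh.
    config_h=${1:-/home/vagrant/spdk_repo/spdk/include/spdk/config.h}
    if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build detected"
        SPDK_AUTOTEST_DEBUG_APPS=1      # debug-only test apps become eligible
    else
        echo "release build"
    fi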
/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:47.400 14:54:09 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:47.400 14:54:09 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:47.401 14:54:09 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:47.401 14:54:09 -- pm/common@16 -- # TEST_TAG=N/A 00:07:47.401 14:54:09 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:47.401 14:54:09 -- common/autotest_common.sh@52 -- # : 1 00:07:47.401 14:54:09 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:47.401 14:54:09 -- common/autotest_common.sh@56 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:47.401 14:54:09 -- common/autotest_common.sh@58 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:47.401 14:54:09 -- common/autotest_common.sh@60 -- # : 1 00:07:47.401 14:54:09 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:47.401 14:54:09 -- common/autotest_common.sh@62 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:47.401 14:54:09 -- common/autotest_common.sh@64 -- # : 00:07:47.401 14:54:09 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:47.401 14:54:09 -- common/autotest_common.sh@66 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:47.401 14:54:09 -- common/autotest_common.sh@68 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:47.401 14:54:09 -- common/autotest_common.sh@70 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:47.401 14:54:09 -- common/autotest_common.sh@72 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:47.401 14:54:09 -- common/autotest_common.sh@74 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:47.401 14:54:09 -- common/autotest_common.sh@76 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:47.401 14:54:09 -- common/autotest_common.sh@78 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:47.401 14:54:09 -- common/autotest_common.sh@80 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:47.401 14:54:09 -- common/autotest_common.sh@82 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:47.401 14:54:09 -- common/autotest_common.sh@84 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:47.401 14:54:09 -- common/autotest_common.sh@86 -- # : 1 00:07:47.401 14:54:09 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:47.401 14:54:09 -- common/autotest_common.sh@88 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:47.401 14:54:09 -- common/autotest_common.sh@90 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:47.401 14:54:09 -- common/autotest_common.sh@92 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:47.401 14:54:09 -- common/autotest_common.sh@94 -- # : 0 00:07:47.401 14:54:09 -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:47.401 14:54:09 -- common/autotest_common.sh@96 -- # : tcp 00:07:47.401 14:54:09 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:47.401 14:54:09 -- common/autotest_common.sh@98 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:47.401 14:54:09 -- common/autotest_common.sh@100 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:47.401 14:54:09 -- common/autotest_common.sh@102 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:47.401 14:54:09 -- common/autotest_common.sh@104 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:47.401 14:54:09 -- common/autotest_common.sh@106 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:47.401 14:54:09 -- common/autotest_common.sh@108 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:47.401 14:54:09 -- common/autotest_common.sh@110 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:47.401 14:54:09 -- common/autotest_common.sh@112 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:47.401 14:54:09 -- common/autotest_common.sh@114 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:47.401 14:54:09 -- common/autotest_common.sh@116 -- # : 1 00:07:47.401 14:54:09 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:47.401 14:54:09 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:07:47.401 14:54:09 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:47.401 14:54:09 -- common/autotest_common.sh@120 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:47.401 14:54:09 -- common/autotest_common.sh@122 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:47.401 14:54:09 -- common/autotest_common.sh@124 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:47.401 14:54:09 -- common/autotest_common.sh@126 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:47.401 14:54:09 -- common/autotest_common.sh@128 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:47.401 14:54:09 -- common/autotest_common.sh@130 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:47.401 14:54:09 -- common/autotest_common.sh@132 -- # : v23.11 00:07:47.401 14:54:09 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:47.401 14:54:09 -- common/autotest_common.sh@134 -- # : true 00:07:47.401 14:54:09 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:47.401 14:54:09 -- common/autotest_common.sh@136 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:47.401 14:54:09 -- common/autotest_common.sh@138 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:47.401 14:54:09 -- common/autotest_common.sh@140 -- # : 1 00:07:47.401 14:54:09 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:47.401 14:54:09 -- 
common/autotest_common.sh@142 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:47.401 14:54:09 -- common/autotest_common.sh@144 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:47.401 14:54:09 -- common/autotest_common.sh@146 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:47.401 14:54:09 -- common/autotest_common.sh@148 -- # : 00:07:47.401 14:54:09 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:47.401 14:54:09 -- common/autotest_common.sh@150 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:47.401 14:54:09 -- common/autotest_common.sh@152 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:47.401 14:54:09 -- common/autotest_common.sh@154 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:47.401 14:54:09 -- common/autotest_common.sh@156 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:47.401 14:54:09 -- common/autotest_common.sh@158 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:47.401 14:54:09 -- common/autotest_common.sh@160 -- # : 0 00:07:47.401 14:54:09 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:47.401 14:54:09 -- common/autotest_common.sh@163 -- # : 00:07:47.401 14:54:09 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:47.401 14:54:09 -- common/autotest_common.sh@165 -- # : 1 00:07:47.401 14:54:09 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:47.401 14:54:09 -- common/autotest_common.sh@167 -- # : 1 00:07:47.401 14:54:09 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:47.401 14:54:09 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:47.401 14:54:09 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:47.401 14:54:09 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:47.401 14:54:09 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:47.401 14:54:09 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:47.401 14:54:09 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:47.401 14:54:09 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:47.401 14:54:09 -- common/autotest_common.sh@174 -- # 
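The long run of ": <value>" / "export SPDK_TEST_*" pairs above is the usual bash default-then-export idiom: each flag keeps whatever value the job environment already set and otherwise falls back to a default. An illustrative reconstruction (the values shown match this run's trace; in the real harness they come from the CI job environment):

    #!/usr/bin/env bash
    # Hypothetical illustration of the default-then-export pattern in autotest_common.sh.
    : "${RUN_NIGHTLY:=1}";                 export RUN_NIGHTLY               # pre-set values win
    : "${SPDK_TEST_NVMF:=1}";              export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}";  export SPDK_TEST_NVMF_TRANSPORT
    : "${SPDK_RUN_UBSAN:=1}";              export SPDK_RUN_UBSAN

    echo "transport=$SPDK_TEST_NVMF_TRANSPORT nightly=$RUN_NIGHTLY"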
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:47.401 14:54:09 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:47.401 14:54:09 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:47.401 14:54:09 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:47.401 14:54:09 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:47.402 14:54:09 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:47.402 14:54:09 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:47.402 14:54:09 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:47.402 14:54:09 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:47.402 14:54:09 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:47.402 14:54:09 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:47.402 14:54:09 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:47.402 14:54:09 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:47.402 14:54:09 -- common/autotest_common.sh@196 -- # cat 00:07:47.402 14:54:09 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:47.402 14:54:09 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:47.402 14:54:09 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:47.402 14:54:09 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:47.402 14:54:09 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:47.402 14:54:09 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:47.402 14:54:09 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:47.402 14:54:09 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:47.402 14:54:09 -- 
common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:47.402 14:54:09 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:47.402 14:54:09 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:47.402 14:54:09 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:47.402 14:54:09 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:47.402 14:54:09 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:47.402 14:54:09 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:47.402 14:54:09 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:47.402 14:54:09 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:47.402 14:54:09 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:47.402 14:54:09 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:47.402 14:54:09 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:07:47.402 14:54:09 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:07:47.402 14:54:09 -- common/autotest_common.sh@249 -- # _LCOV= 00:07:47.402 14:54:09 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:07:47.402 14:54:09 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:07:47.402 14:54:09 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:47.402 14:54:09 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:07:47.402 14:54:09 -- common/autotest_common.sh@255 -- # lcov_opt= 00:07:47.402 14:54:09 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:07:47.402 14:54:09 -- common/autotest_common.sh@259 -- # export valgrind= 00:07:47.402 14:54:09 -- common/autotest_common.sh@259 -- # valgrind= 00:07:47.402 14:54:09 -- common/autotest_common.sh@265 -- # uname -s 00:07:47.402 14:54:09 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:07:47.402 14:54:09 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:07:47.402 14:54:09 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:07:47.402 14:54:09 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:07:47.402 14:54:09 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:47.402 14:54:09 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:47.402 14:54:09 -- common/autotest_common.sh@275 -- # MAKE=make 00:07:47.402 14:54:09 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:07:47.402 14:54:09 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:07:47.402 14:54:09 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:07:47.402 14:54:09 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:47.402 14:54:09 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:07:47.402 14:54:09 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:07:47.402 14:54:09 -- common/autotest_common.sh@301 -- # for i in "$@" 00:07:47.402 14:54:09 -- common/autotest_common.sh@302 -- # case "$i" in 00:07:47.402 14:54:09 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=tcp 00:07:47.402 14:54:09 -- common/autotest_common.sh@319 -- # [[ 
-z 72396 ]] 00:07:47.402 14:54:09 -- common/autotest_common.sh@319 -- # kill -0 72396 00:07:47.402 14:54:09 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:07:47.402 14:54:09 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:07:47.402 14:54:09 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:07:47.402 14:54:09 -- common/autotest_common.sh@332 -- # local mount target_dir 00:07:47.402 14:54:09 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:07:47.402 14:54:09 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:07:47.402 14:54:09 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:07:47.402 14:54:09 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:07:47.402 14:54:09 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.NfaTcR 00:07:47.402 14:54:09 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:47.402 14:54:09 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:07:47.402 14:54:09 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:07:47.402 14:54:09 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.NfaTcR/tests/target /tmp/spdk.NfaTcR 00:07:47.402 14:54:09 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:07:47.402 14:54:09 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:47.402 14:54:09 -- common/autotest_common.sh@328 -- # df -T 00:07:47.402 14:54:09 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:07:47.402 14:54:09 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:07:47.402 14:54:09 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:07:47.402 14:54:09 -- common/autotest_common.sh@363 -- # avails["$mount"]=13293789184 00:07:47.402 14:54:09 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:07:47.402 14:54:09 -- common/autotest_common.sh@364 -- # uses["$mount"]=6289367040 00:07:47.402 14:54:09 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:47.402 14:54:09 -- common/autotest_common.sh@362 -- # mounts["$mount"]=devtmpfs 00:07:47.402 14:54:09 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:07:47.402 14:54:09 -- common/autotest_common.sh@363 -- # avails["$mount"]=4194304 00:07:47.402 14:54:09 -- common/autotest_common.sh@363 -- # sizes["$mount"]=4194304 00:07:47.402 14:54:09 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:07:47.402 14:54:09 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:47.402 14:54:09 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:47.402 14:54:09 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:47.402 14:54:09 -- common/autotest_common.sh@363 -- # avails["$mount"]=6265171968 00:07:47.402 14:54:09 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266429440 00:07:47.402 14:54:09 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:07:47.402 14:54:09 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:47.402 14:54:09 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:47.402 14:54:09 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:47.402 14:54:09 -- common/autotest_common.sh@363 -- # avails["$mount"]=2493755392 00:07:47.402 14:54:09 -- 
common/autotest_common.sh@363 -- # sizes["$mount"]=2506571776 00:07:47.402 14:54:09 -- common/autotest_common.sh@364 -- # uses["$mount"]=12816384 00:07:47.402 14:54:09 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:47.402 14:54:09 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:07:47.402 14:54:09 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:07:47.402 14:54:09 -- common/autotest_common.sh@363 -- # avails["$mount"]=13293789184 00:07:47.402 14:54:09 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:07:47.402 14:54:09 -- common/autotest_common.sh@364 -- # uses["$mount"]=6289367040 00:07:47.402 14:54:09 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:47.402 14:54:09 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda2 00:07:47.402 14:54:09 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:07:47.402 14:54:09 -- common/autotest_common.sh@363 -- # avails["$mount"]=840085504 00:07:47.402 14:54:09 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1012768768 00:07:47.402 14:54:09 -- common/autotest_common.sh@364 -- # uses["$mount"]=103477248 00:07:47.402 14:54:09 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:47.402 14:54:09 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:47.402 14:54:09 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:47.402 14:54:09 -- common/autotest_common.sh@363 -- # avails["$mount"]=6266290176 00:07:47.402 14:54:09 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266429440 00:07:47.402 14:54:09 -- common/autotest_common.sh@364 -- # uses["$mount"]=139264 00:07:47.402 14:54:09 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:47.402 14:54:09 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda3 00:07:47.402 14:54:09 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:07:47.402 14:54:09 -- common/autotest_common.sh@363 -- # avails["$mount"]=91617280 00:07:47.402 14:54:09 -- common/autotest_common.sh@363 -- # sizes["$mount"]=104607744 00:07:47.402 14:54:09 -- common/autotest_common.sh@364 -- # uses["$mount"]=12990464 00:07:47.402 14:54:09 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:47.402 14:54:09 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:47.402 14:54:09 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:47.402 14:54:09 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253273600 00:07:47.402 14:54:09 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253285888 00:07:47.402 14:54:09 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:07:47.402 14:54:09 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:47.402 14:54:09 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:07:47.403 14:54:09 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:07:47.403 14:54:09 -- common/autotest_common.sh@363 -- # avails["$mount"]=98360598528 00:07:47.403 14:54:09 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:07:47.403 14:54:09 -- common/autotest_common.sh@364 -- # uses["$mount"]=1342181376 00:07:47.403 14:54:09 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:47.403 14:54:09 -- common/autotest_common.sh@367 -- # printf '* Looking 
for test storage...\n' 00:07:47.403 * Looking for test storage... 00:07:47.403 14:54:09 -- common/autotest_common.sh@369 -- # local target_space new_size 00:07:47.403 14:54:09 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:07:47.403 14:54:09 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:47.403 14:54:09 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:47.403 14:54:09 -- common/autotest_common.sh@373 -- # mount=/home 00:07:47.403 14:54:09 -- common/autotest_common.sh@375 -- # target_space=13293789184 00:07:47.403 14:54:09 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:07:47.403 14:54:09 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:07:47.403 14:54:09 -- common/autotest_common.sh@381 -- # [[ btrfs == tmpfs ]] 00:07:47.403 14:54:09 -- common/autotest_common.sh@381 -- # [[ btrfs == ramfs ]] 00:07:47.403 14:54:09 -- common/autotest_common.sh@381 -- # [[ /home == / ]] 00:07:47.403 14:54:09 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:47.403 14:54:09 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:47.403 14:54:09 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:47.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:47.403 14:54:09 -- common/autotest_common.sh@390 -- # return 0 00:07:47.403 14:54:09 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:07:47.403 14:54:09 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:07:47.403 14:54:09 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:47.403 14:54:09 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:47.403 14:54:09 -- common/autotest_common.sh@1682 -- # true 00:07:47.403 14:54:09 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:07:47.403 14:54:09 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:47.403 14:54:09 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:47.403 14:54:09 -- common/autotest_common.sh@27 -- # exec 00:07:47.403 14:54:09 -- common/autotest_common.sh@29 -- # exec 00:07:47.403 14:54:09 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:47.403 14:54:09 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
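set_test_storage, traced above, walks a list of candidate directories (the test dir first, then a mktemp-based fallback) and picks the first whose filesystem has at least the requested free space, exporting it as SPDK_TEST_STORAGE. A simplified sketch under those assumptions (pick_test_storage is hypothetical and uses GNU df's --output columns instead of the script's awk parsing):

    #!/usr/bin/env bash
    # Hypothetical condensed version of the storage-selection logic.
    pick_test_storage() {
        local requested=$1; shift
        local fallback dir
        fallback=$(mktemp -udt spdk.XXXXXX)            # dry-run temp name, as in the trace
        for dir in "$@" "$fallback/tests" "$fallback"; do
            mkdir -p "$dir" 2>/dev/null || continue
            local avail
            avail=$(df --output=avail -B1 -- "$dir" 2>/dev/null | tail -n1)
            if (( avail >= requested )); then
                echo "$dir"
                return 0
            fi
            rmdir "$dir" 2>/dev/null || true           # tidy up candidates that were too small
        done
        return 1
    }

    # e.g. require ~2 GiB plus overhead, as in the trace (2214592512 bytes)
    SPDK_TEST_STORAGE=$(pick_test_storage 2214592512 /home/vagrant/spdk_repo/spdk/test/nvmf/target)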
0 : 0 - 1]' 00:07:47.403 14:54:09 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:47.403 14:54:09 -- common/autotest_common.sh@18 -- # set -x 00:07:47.403 14:54:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:47.403 14:54:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:47.403 14:54:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:47.403 14:54:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:47.403 14:54:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:47.403 14:54:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:47.403 14:54:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:47.403 14:54:09 -- scripts/common.sh@335 -- # IFS=.-: 00:07:47.403 14:54:09 -- scripts/common.sh@335 -- # read -ra ver1 00:07:47.403 14:54:09 -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.403 14:54:09 -- scripts/common.sh@336 -- # read -ra ver2 00:07:47.403 14:54:09 -- scripts/common.sh@337 -- # local 'op=<' 00:07:47.403 14:54:09 -- scripts/common.sh@339 -- # ver1_l=2 00:07:47.403 14:54:09 -- scripts/common.sh@340 -- # ver2_l=1 00:07:47.403 14:54:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:47.403 14:54:09 -- scripts/common.sh@343 -- # case "$op" in 00:07:47.403 14:54:09 -- scripts/common.sh@344 -- # : 1 00:07:47.403 14:54:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:47.403 14:54:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:47.403 14:54:09 -- scripts/common.sh@364 -- # decimal 1 00:07:47.403 14:54:09 -- scripts/common.sh@352 -- # local d=1 00:07:47.403 14:54:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.403 14:54:09 -- scripts/common.sh@354 -- # echo 1 00:07:47.403 14:54:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:47.403 14:54:09 -- scripts/common.sh@365 -- # decimal 2 00:07:47.403 14:54:09 -- scripts/common.sh@352 -- # local d=2 00:07:47.403 14:54:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.403 14:54:09 -- scripts/common.sh@354 -- # echo 2 00:07:47.403 14:54:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:47.403 14:54:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:47.403 14:54:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:47.403 14:54:09 -- scripts/common.sh@367 -- # return 0 00:07:47.403 14:54:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.403 14:54:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:47.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.403 --rc genhtml_branch_coverage=1 00:07:47.403 --rc genhtml_function_coverage=1 00:07:47.403 --rc genhtml_legend=1 00:07:47.403 --rc geninfo_all_blocks=1 00:07:47.403 --rc geninfo_unexecuted_blocks=1 00:07:47.403 00:07:47.403 ' 00:07:47.403 14:54:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:47.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.403 --rc genhtml_branch_coverage=1 00:07:47.403 --rc genhtml_function_coverage=1 00:07:47.403 --rc genhtml_legend=1 00:07:47.403 --rc geninfo_all_blocks=1 00:07:47.403 --rc geninfo_unexecuted_blocks=1 00:07:47.403 00:07:47.403 ' 00:07:47.403 14:54:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:47.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.403 --rc genhtml_branch_coverage=1 00:07:47.403 --rc genhtml_function_coverage=1 00:07:47.403 --rc genhtml_legend=1 00:07:47.403 --rc geninfo_all_blocks=1 00:07:47.403 --rc 
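The errtrace/extdebug/trap/PS4 sequence above is what turns any failing command into a printed backtrace plus the timestamped xtrace prefix seen throughout this log. A minimal, self-contained version of that setup (print_backtrace here is a simplified stand-in for the harness's helper, and the PS4 path trimming is simplified):

    #!/usr/bin/env bash
    # Hypothetical minimal errtrace/backtrace setup.
    set -o errtrace        # make the ERR trap fire inside functions and subshells too
    shopt -s extdebug

    print_backtrace() {
        local i
        echo "=== backtrace ===" >&2
        for (( i = 1; i < ${#FUNCNAME[@]}; i++ )); do
            echo "  at ${FUNCNAME[i]} (${BASH_SOURCE[i]}:${BASH_LINENO[i-1]})" >&2
        done
    }

    trap 'trap - ERR; print_backtrace' ERR
    PS4=' \t -- ${BASH_SOURCE##*/}@${LINENO} -- \$ '   # timestamped xtrace prefix, as in the log
    set -x

    false   # demo: triggers the ERR trap and prints the backtrace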
geninfo_unexecuted_blocks=1 00:07:47.403 00:07:47.403 ' 00:07:47.403 14:54:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:47.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.403 --rc genhtml_branch_coverage=1 00:07:47.403 --rc genhtml_function_coverage=1 00:07:47.403 --rc genhtml_legend=1 00:07:47.403 --rc geninfo_all_blocks=1 00:07:47.403 --rc geninfo_unexecuted_blocks=1 00:07:47.403 00:07:47.403 ' 00:07:47.403 14:54:09 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:47.403 14:54:09 -- nvmf/common.sh@7 -- # uname -s 00:07:47.403 14:54:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.403 14:54:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.403 14:54:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.403 14:54:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.403 14:54:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.403 14:54:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.403 14:54:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.403 14:54:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.403 14:54:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.403 14:54:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.403 14:54:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:07:47.403 14:54:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:07:47.403 14:54:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.403 14:54:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.403 14:54:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:47.403 14:54:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:47.403 14:54:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.403 14:54:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.403 14:54:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.403 14:54:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.404 14:54:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.404 14:54:09 -- paths/export.sh@4 -- # 
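nvmf/common.sh, traced above, derives the initiator identity from nvme gen-hostnqn and reuses its UUID as the host ID, bundling both into the NVME_HOST array for later "nvme connect" calls. A hedged sketch of that derivation (the exact parameter expansion in the script may differ):

    #!/usr/bin/env bash
    # Hypothetical sketch of the host-identity setup used by later nvme connect calls.
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep the UUID portion as the host ID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    # Later tests can then connect to the listener on 10.0.0.2:4420, roughly:
    # nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"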
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.404 14:54:09 -- paths/export.sh@5 -- # export PATH 00:07:47.404 14:54:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.404 14:54:09 -- nvmf/common.sh@46 -- # : 0 00:07:47.404 14:54:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:47.404 14:54:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:47.404 14:54:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:47.404 14:54:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.404 14:54:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.404 14:54:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:47.404 14:54:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:47.404 14:54:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:47.404 14:54:09 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:47.404 14:54:09 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:47.404 14:54:09 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:47.404 14:54:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:47.404 14:54:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.404 14:54:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:47.404 14:54:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:47.404 14:54:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:47.404 14:54:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.404 14:54:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.404 14:54:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.404 14:54:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:47.404 14:54:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:47.404 14:54:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:47.404 14:54:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:47.404 14:54:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:47.404 14:54:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:47.404 14:54:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.404 14:54:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.404 14:54:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:47.404 14:54:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:47.404 14:54:09 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:47.404 14:54:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:47.404 14:54:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:47.404 14:54:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.404 14:54:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:47.404 14:54:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:47.404 14:54:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:47.404 14:54:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:47.404 14:54:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:47.404 14:54:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:47.404 Cannot find device "nvmf_tgt_br" 00:07:47.404 14:54:09 -- nvmf/common.sh@154 -- # true 00:07:47.404 14:54:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:47.404 Cannot find device "nvmf_tgt_br2" 00:07:47.404 14:54:09 -- nvmf/common.sh@155 -- # true 00:07:47.404 14:54:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:47.404 14:54:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:47.404 Cannot find device "nvmf_tgt_br" 00:07:47.404 14:54:09 -- nvmf/common.sh@157 -- # true 00:07:47.404 14:54:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:47.404 Cannot find device "nvmf_tgt_br2" 00:07:47.404 14:54:09 -- nvmf/common.sh@158 -- # true 00:07:47.404 14:54:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:47.404 14:54:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:47.404 14:54:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:47.404 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:47.404 14:54:09 -- nvmf/common.sh@161 -- # true 00:07:47.404 14:54:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:47.404 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:47.404 14:54:09 -- nvmf/common.sh@162 -- # true 00:07:47.404 14:54:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:47.404 14:54:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:47.404 14:54:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:47.404 14:54:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:47.404 14:54:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:47.404 14:54:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:47.404 14:54:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:47.404 14:54:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:47.404 14:54:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:47.404 14:54:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:47.404 14:54:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:47.404 14:54:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:47.404 14:54:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:47.404 14:54:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:47.404 14:54:10 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:47.404 14:54:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:47.404 14:54:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:47.404 14:54:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:47.404 14:54:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:47.404 14:54:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:47.404 14:54:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:47.404 14:54:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:47.404 14:54:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:47.404 14:54:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:47.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:47.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:07:47.404 00:07:47.404 --- 10.0.0.2 ping statistics --- 00:07:47.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.404 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:07:47.404 14:54:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:47.404 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:47.404 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:07:47.404 00:07:47.404 --- 10.0.0.3 ping statistics --- 00:07:47.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.404 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:07:47.404 14:54:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:47.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:47.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:07:47.404 00:07:47.404 --- 10.0.0.1 ping statistics --- 00:07:47.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.404 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:07:47.404 14:54:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.404 14:54:10 -- nvmf/common.sh@421 -- # return 0 00:07:47.404 14:54:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:47.404 14:54:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.404 14:54:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:47.404 14:54:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:47.404 14:54:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.404 14:54:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:47.404 14:54:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:47.404 14:54:10 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:47.404 14:54:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:47.404 14:54:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.404 14:54:10 -- common/autotest_common.sh@10 -- # set +x 00:07:47.404 ************************************ 00:07:47.404 START TEST nvmf_filesystem_no_in_capsule 00:07:47.404 ************************************ 00:07:47.404 14:54:10 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:07:47.404 14:54:10 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:47.404 14:54:10 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:47.404 14:54:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:47.404 14:54:10 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:07:47.404 14:54:10 -- common/autotest_common.sh@10 -- # set +x 00:07:47.404 14:54:10 -- nvmf/common.sh@469 -- # nvmfpid=72575 00:07:47.404 14:54:10 -- nvmf/common.sh@470 -- # waitforlisten 72575 00:07:47.404 14:54:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:47.404 14:54:10 -- common/autotest_common.sh@829 -- # '[' -z 72575 ']' 00:07:47.404 14:54:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.404 14:54:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.404 14:54:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.405 14:54:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.405 14:54:10 -- common/autotest_common.sh@10 -- # set +x 00:07:47.405 [2024-11-18 14:54:10.275509] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:47.405 [2024-11-18 14:54:10.275596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.405 [2024-11-18 14:54:10.415476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.405 [2024-11-18 14:54:10.474806] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:47.405 [2024-11-18 14:54:10.474933] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.405 [2024-11-18 14:54:10.474944] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.405 [2024-11-18 14:54:10.474952] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
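The block above is nvmf_veth_init from nvmf/common.sh building the test network: a dedicated target network namespace, veth pairs whose host-side peers hang off a bridge, an iptables rule for the NVMe/TCP port, ping checks, and finally nvmf_tgt launched inside the namespace. A minimal standalone sketch of that setup, reconstructed from the commands in this log (interface names, addresses and the port-4420 rule are taken verbatim; the second target interface nvmf_tgt_if2/10.0.0.3 is omitted for brevity, it needs root, and the grouping into one script is illustrative, not the harness itself):

  ip netns add nvmf_tgt_ns_spdk                                     # target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br         # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br           # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                    # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                                   # bridge ties the host-side peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                # host -> target-namespace reachability
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &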
00:07:47.405 [2024-11-18 14:54:10.475021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.405 [2024-11-18 14:54:10.475898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.405 [2024-11-18 14:54:10.476014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.405 [2024-11-18 14:54:10.476032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.971 14:54:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:47.971 14:54:11 -- common/autotest_common.sh@862 -- # return 0 00:07:47.971 14:54:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:47.971 14:54:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:47.971 14:54:11 -- common/autotest_common.sh@10 -- # set +x 00:07:47.971 14:54:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.971 14:54:11 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:47.971 14:54:11 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:47.971 14:54:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.971 14:54:11 -- common/autotest_common.sh@10 -- # set +x 00:07:47.971 [2024-11-18 14:54:11.357677] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.971 14:54:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.971 14:54:11 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:47.971 14:54:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.971 14:54:11 -- common/autotest_common.sh@10 -- # set +x 00:07:47.971 Malloc1 00:07:47.971 14:54:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.971 14:54:11 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:47.971 14:54:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.971 14:54:11 -- common/autotest_common.sh@10 -- # set +x 00:07:47.971 14:54:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.971 14:54:11 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:47.971 14:54:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.971 14:54:11 -- common/autotest_common.sh@10 -- # set +x 00:07:47.971 14:54:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.971 14:54:11 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:47.971 14:54:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.971 14:54:11 -- common/autotest_common.sh@10 -- # set +x 00:07:47.971 [2024-11-18 14:54:11.545972] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:47.971 14:54:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.971 14:54:11 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:47.971 14:54:11 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:07:47.971 14:54:11 -- common/autotest_common.sh@1368 -- # local bdev_info 00:07:47.971 14:54:11 -- common/autotest_common.sh@1369 -- # local bs 00:07:47.971 14:54:11 -- common/autotest_common.sh@1370 -- # local nb 00:07:47.971 14:54:11 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:47.971 14:54:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.971 14:54:11 -- common/autotest_common.sh@10 -- # set +x 00:07:48.229 
14:54:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.229 14:54:11 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:07:48.229 { 00:07:48.229 "aliases": [ 00:07:48.229 "493c75c6-0196-42b0-a6d6-5cf8a46d80c8" 00:07:48.229 ], 00:07:48.229 "assigned_rate_limits": { 00:07:48.229 "r_mbytes_per_sec": 0, 00:07:48.229 "rw_ios_per_sec": 0, 00:07:48.229 "rw_mbytes_per_sec": 0, 00:07:48.229 "w_mbytes_per_sec": 0 00:07:48.229 }, 00:07:48.229 "block_size": 512, 00:07:48.229 "claim_type": "exclusive_write", 00:07:48.229 "claimed": true, 00:07:48.229 "driver_specific": {}, 00:07:48.229 "memory_domains": [ 00:07:48.229 { 00:07:48.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.229 "dma_device_type": 2 00:07:48.229 } 00:07:48.229 ], 00:07:48.229 "name": "Malloc1", 00:07:48.229 "num_blocks": 1048576, 00:07:48.229 "product_name": "Malloc disk", 00:07:48.229 "supported_io_types": { 00:07:48.229 "abort": true, 00:07:48.229 "compare": false, 00:07:48.229 "compare_and_write": false, 00:07:48.229 "flush": true, 00:07:48.229 "nvme_admin": false, 00:07:48.229 "nvme_io": false, 00:07:48.229 "read": true, 00:07:48.229 "reset": true, 00:07:48.229 "unmap": true, 00:07:48.229 "write": true, 00:07:48.229 "write_zeroes": true 00:07:48.229 }, 00:07:48.229 "uuid": "493c75c6-0196-42b0-a6d6-5cf8a46d80c8", 00:07:48.229 "zoned": false 00:07:48.229 } 00:07:48.229 ]' 00:07:48.229 14:54:11 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:07:48.229 14:54:11 -- common/autotest_common.sh@1372 -- # bs=512 00:07:48.229 14:54:11 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:07:48.229 14:54:11 -- common/autotest_common.sh@1373 -- # nb=1048576 00:07:48.229 14:54:11 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:07:48.229 14:54:11 -- common/autotest_common.sh@1377 -- # echo 512 00:07:48.229 14:54:11 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:48.229 14:54:11 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:48.488 14:54:11 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:48.488 14:54:11 -- common/autotest_common.sh@1187 -- # local i=0 00:07:48.488 14:54:11 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:07:48.488 14:54:11 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:07:48.488 14:54:11 -- common/autotest_common.sh@1194 -- # sleep 2 00:07:50.387 14:54:13 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:07:50.387 14:54:13 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:07:50.387 14:54:13 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:07:50.387 14:54:13 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:07:50.387 14:54:13 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:07:50.387 14:54:13 -- common/autotest_common.sh@1197 -- # return 0 00:07:50.387 14:54:13 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:50.387 14:54:13 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:50.387 14:54:13 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:50.387 14:54:13 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:50.387 14:54:13 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:50.387 14:54:13 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:50.387 14:54:13 -- 
setup/common.sh@80 -- # echo 536870912 00:07:50.387 14:54:13 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:50.387 14:54:13 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:50.388 14:54:13 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:50.388 14:54:13 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:50.388 14:54:13 -- target/filesystem.sh@69 -- # partprobe 00:07:50.645 14:54:14 -- target/filesystem.sh@70 -- # sleep 1 00:07:51.580 14:54:15 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:51.580 14:54:15 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:51.580 14:54:15 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:51.580 14:54:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.580 14:54:15 -- common/autotest_common.sh@10 -- # set +x 00:07:51.580 ************************************ 00:07:51.580 START TEST filesystem_ext4 00:07:51.580 ************************************ 00:07:51.580 14:54:15 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:51.580 14:54:15 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:51.580 14:54:15 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:51.580 14:54:15 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:51.580 14:54:15 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:51.580 14:54:15 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:51.580 14:54:15 -- common/autotest_common.sh@914 -- # local i=0 00:07:51.580 14:54:15 -- common/autotest_common.sh@915 -- # local force 00:07:51.580 14:54:15 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:51.580 14:54:15 -- common/autotest_common.sh@918 -- # force=-F 00:07:51.580 14:54:15 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:51.580 mke2fs 1.47.0 (5-Feb-2023) 00:07:51.580 Discarding device blocks: 0/522240 done 00:07:51.580 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:51.580 Filesystem UUID: e94df5b5-abe2-4d5b-8a1e-e73f79d0cd0c 00:07:51.580 Superblock backups stored on blocks: 00:07:51.580 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:51.580 00:07:51.580 Allocating group tables: 0/64 done 00:07:51.580 Writing inode tables: 0/64 done 00:07:51.837 Creating journal (8192 blocks): done 00:07:51.837 Writing superblocks and filesystem accounting information: 0/64 done 00:07:51.837 00:07:51.838 14:54:15 -- common/autotest_common.sh@931 -- # return 0 00:07:51.838 14:54:15 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:57.099 14:54:20 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:57.099 14:54:20 -- target/filesystem.sh@25 -- # sync 00:07:57.099 14:54:20 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:57.100 14:54:20 -- target/filesystem.sh@27 -- # sync 00:07:57.100 14:54:20 -- target/filesystem.sh@29 -- # i=0 00:07:57.100 14:54:20 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:57.100 14:54:20 -- target/filesystem.sh@37 -- # kill -0 72575 00:07:57.100 14:54:20 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:57.100 14:54:20 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:57.100 14:54:20 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:57.100 14:54:20 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:57.100 00:07:57.100 real 0m5.621s 00:07:57.100 user 0m0.029s 00:07:57.100 sys 0m0.058s 00:07:57.100 
14:54:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:57.100 14:54:20 -- common/autotest_common.sh@10 -- # set +x 00:07:57.100 ************************************ 00:07:57.100 END TEST filesystem_ext4 00:07:57.100 ************************************ 00:07:57.358 14:54:20 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:57.358 14:54:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:57.358 14:54:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.358 14:54:20 -- common/autotest_common.sh@10 -- # set +x 00:07:57.358 ************************************ 00:07:57.358 START TEST filesystem_btrfs 00:07:57.358 ************************************ 00:07:57.358 14:54:20 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:57.358 14:54:20 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:57.358 14:54:20 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:57.358 14:54:20 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:57.358 14:54:20 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:57.358 14:54:20 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:57.358 14:54:20 -- common/autotest_common.sh@914 -- # local i=0 00:07:57.358 14:54:20 -- common/autotest_common.sh@915 -- # local force 00:07:57.358 14:54:20 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:57.358 14:54:20 -- common/autotest_common.sh@920 -- # force=-f 00:07:57.358 14:54:20 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:57.358 btrfs-progs v6.8.1 00:07:57.358 See https://btrfs.readthedocs.io for more information. 00:07:57.358 00:07:57.358 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
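Between the target start above and the ext4 run that just completed, the harness wires up the storage stack over RPC and then attaches the kernel host. A condensed sketch of that sequence, reconstructed from the rpc_cmd and nvme invocations visible in the log (the scripts/rpc.py path is an assumption about how rpc_cmd reaches the target in the namespace; every argument is taken from the log):

  RPC="ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0                  # TCP transport, in-capsule data disabled
  $RPC bdev_malloc_create 512 512 -b Malloc1                         # 512 MiB RAM-backed bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # allow any host, set serial
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1      # expose the bdev as a namespace
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # host side: kernel initiator connects across the veth/bridge path, then the disk is partitioned
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd \
      --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%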
00:07:57.358 NOTE: several default settings have changed in version 5.15, please make sure 00:07:57.358 this does not affect your deployments: 00:07:57.358 - DUP for metadata (-m dup) 00:07:57.358 - enabled no-holes (-O no-holes) 00:07:57.358 - enabled free-space-tree (-R free-space-tree) 00:07:57.358 00:07:57.358 Label: (null) 00:07:57.358 UUID: 79db67cb-f693-453e-b64f-312df5a219b2 00:07:57.358 Node size: 16384 00:07:57.358 Sector size: 4096 (CPU page size: 4096) 00:07:57.358 Filesystem size: 510.00MiB 00:07:57.358 Block group profiles: 00:07:57.358 Data: single 8.00MiB 00:07:57.358 Metadata: DUP 32.00MiB 00:07:57.358 System: DUP 8.00MiB 00:07:57.358 SSD detected: yes 00:07:57.358 Zoned device: no 00:07:57.358 Features: extref, skinny-metadata, no-holes, free-space-tree 00:07:57.358 Checksum: crc32c 00:07:57.358 Number of devices: 1 00:07:57.358 Devices: 00:07:57.358 ID SIZE PATH 00:07:57.358 1 510.00MiB /dev/nvme0n1p1 00:07:57.358 00:07:57.358 14:54:20 -- common/autotest_common.sh@931 -- # return 0 00:07:57.358 14:54:20 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:57.358 14:54:20 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:57.358 14:54:20 -- target/filesystem.sh@25 -- # sync 00:07:57.358 14:54:20 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:57.358 14:54:20 -- target/filesystem.sh@27 -- # sync 00:07:57.358 14:54:20 -- target/filesystem.sh@29 -- # i=0 00:07:57.358 14:54:20 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:57.616 14:54:20 -- target/filesystem.sh@37 -- # kill -0 72575 00:07:57.616 14:54:20 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:57.616 14:54:20 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:57.616 14:54:20 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:57.616 14:54:20 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:57.616 00:07:57.616 real 0m0.230s 00:07:57.616 user 0m0.020s 00:07:57.616 sys 0m0.067s 00:07:57.616 14:54:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:57.616 14:54:20 -- common/autotest_common.sh@10 -- # set +x 00:07:57.616 ************************************ 00:07:57.616 END TEST filesystem_btrfs 00:07:57.616 ************************************ 00:07:57.616 14:54:21 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:57.616 14:54:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:57.616 14:54:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.616 14:54:21 -- common/autotest_common.sh@10 -- # set +x 00:07:57.616 ************************************ 00:07:57.616 START TEST filesystem_xfs 00:07:57.616 ************************************ 00:07:57.616 14:54:21 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:07:57.616 14:54:21 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:57.616 14:54:21 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:57.616 14:54:21 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:57.616 14:54:21 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:57.616 14:54:21 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:57.616 14:54:21 -- common/autotest_common.sh@914 -- # local i=0 00:07:57.616 14:54:21 -- common/autotest_common.sh@915 -- # local force 00:07:57.616 14:54:21 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:57.616 14:54:21 -- common/autotest_common.sh@920 -- # force=-f 00:07:57.616 14:54:21 -- common/autotest_common.sh@923 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:07:57.616 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:57.616 = sectsz=512 attr=2, projid32bit=1 00:07:57.616 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:57.616 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:57.616 data = bsize=4096 blocks=130560, imaxpct=25 00:07:57.616 = sunit=0 swidth=0 blks 00:07:57.616 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:57.616 log =internal log bsize=4096 blocks=16384, version=2 00:07:57.616 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:57.616 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:58.550 Discarding blocks...Done. 00:07:58.550 14:54:21 -- common/autotest_common.sh@931 -- # return 0 00:07:58.550 14:54:21 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:01.082 14:54:24 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:01.082 14:54:24 -- target/filesystem.sh@25 -- # sync 00:08:01.082 14:54:24 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:01.082 14:54:24 -- target/filesystem.sh@27 -- # sync 00:08:01.082 14:54:24 -- target/filesystem.sh@29 -- # i=0 00:08:01.082 14:54:24 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:01.082 14:54:24 -- target/filesystem.sh@37 -- # kill -0 72575 00:08:01.082 14:54:24 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:01.082 14:54:24 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:01.082 14:54:24 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:01.082 14:54:24 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:01.082 00:08:01.082 real 0m3.211s 00:08:01.082 user 0m0.021s 00:08:01.082 sys 0m0.057s 00:08:01.082 14:54:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:01.082 14:54:24 -- common/autotest_common.sh@10 -- # set +x 00:08:01.082 ************************************ 00:08:01.082 END TEST filesystem_xfs 00:08:01.082 ************************************ 00:08:01.082 14:54:24 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:01.082 14:54:24 -- target/filesystem.sh@93 -- # sync 00:08:01.082 14:54:24 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:01.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.082 14:54:24 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:01.082 14:54:24 -- common/autotest_common.sh@1208 -- # local i=0 00:08:01.082 14:54:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:01.082 14:54:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:01.082 14:54:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:01.082 14:54:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:01.082 14:54:24 -- common/autotest_common.sh@1220 -- # return 0 00:08:01.082 14:54:24 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:01.082 14:54:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.082 14:54:24 -- common/autotest_common.sh@10 -- # set +x 00:08:01.082 14:54:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.082 14:54:24 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:01.082 14:54:24 -- target/filesystem.sh@101 -- # killprocess 72575 00:08:01.082 14:54:24 -- common/autotest_common.sh@936 -- # '[' -z 72575 ']' 00:08:01.082 14:54:24 -- common/autotest_common.sh@940 -- # kill -0 72575 00:08:01.082 14:54:24 -- common/autotest_common.sh@941 -- # uname 00:08:01.082 14:54:24 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:01.082 14:54:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72575 00:08:01.082 14:54:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:01.082 14:54:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:01.082 killing process with pid 72575 00:08:01.082 14:54:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72575' 00:08:01.082 14:54:24 -- common/autotest_common.sh@955 -- # kill 72575 00:08:01.082 14:54:24 -- common/autotest_common.sh@960 -- # wait 72575 00:08:01.649 14:54:24 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:01.649 00:08:01.649 real 0m14.756s 00:08:01.649 user 0m57.064s 00:08:01.649 sys 0m1.577s 00:08:01.649 14:54:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:01.649 14:54:24 -- common/autotest_common.sh@10 -- # set +x 00:08:01.649 ************************************ 00:08:01.649 END TEST nvmf_filesystem_no_in_capsule 00:08:01.649 ************************************ 00:08:01.649 14:54:25 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:01.649 14:54:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:01.649 14:54:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.649 14:54:25 -- common/autotest_common.sh@10 -- # set +x 00:08:01.649 ************************************ 00:08:01.649 START TEST nvmf_filesystem_in_capsule 00:08:01.649 ************************************ 00:08:01.649 14:54:25 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:08:01.649 14:54:25 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:01.649 14:54:25 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:01.649 14:54:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:01.649 14:54:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:01.649 14:54:25 -- common/autotest_common.sh@10 -- # set +x 00:08:01.649 14:54:25 -- nvmf/common.sh@469 -- # nvmfpid=72947 00:08:01.649 14:54:25 -- nvmf/common.sh@470 -- # waitforlisten 72947 00:08:01.649 14:54:25 -- common/autotest_common.sh@829 -- # '[' -z 72947 ']' 00:08:01.649 14:54:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:01.649 14:54:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.649 14:54:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:01.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.649 14:54:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.649 14:54:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:01.649 14:54:25 -- common/autotest_common.sh@10 -- # set +x 00:08:01.649 [2024-11-18 14:54:25.076386] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
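The first half of this log ran nvmf_filesystem_part 0 (in_capsule=0); what starts here is the same flow with in_capsule=4096. The only functional difference is the transport creation: with -c 4096 the target accepts up to 4 KiB of write payload carried inside the command capsule itself, rather than fetching it in a separate data transfer (that reading of -c is inferred from the test names and SPDK's in-capsule-data-size option, so treat it as a hedged note). The two calls as they appear in this log, using the harness's rpc_cmd wrapper:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0      # nvmf_filesystem_no_in_capsule
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # nvmf_filesystem_in_capsule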
00:08:01.649 [2024-11-18 14:54:25.076465] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.649 [2024-11-18 14:54:25.213089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:01.908 [2024-11-18 14:54:25.285128] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:01.908 [2024-11-18 14:54:25.285348] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.908 [2024-11-18 14:54:25.285371] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.908 [2024-11-18 14:54:25.285384] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:01.908 [2024-11-18 14:54:25.285556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.908 [2024-11-18 14:54:25.285936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.908 [2024-11-18 14:54:25.286156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.908 [2024-11-18 14:54:25.286167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.843 14:54:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:02.843 14:54:26 -- common/autotest_common.sh@862 -- # return 0 00:08:02.843 14:54:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:02.843 14:54:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:02.843 14:54:26 -- common/autotest_common.sh@10 -- # set +x 00:08:02.843 14:54:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.843 14:54:26 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:02.843 14:54:26 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:02.843 14:54:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.843 14:54:26 -- common/autotest_common.sh@10 -- # set +x 00:08:02.843 [2024-11-18 14:54:26.140927] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.843 14:54:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.843 14:54:26 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:02.843 14:54:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.843 14:54:26 -- common/autotest_common.sh@10 -- # set +x 00:08:02.843 Malloc1 00:08:02.843 14:54:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.843 14:54:26 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:02.843 14:54:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.843 14:54:26 -- common/autotest_common.sh@10 -- # set +x 00:08:02.843 14:54:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.843 14:54:26 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:02.843 14:54:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.843 14:54:26 -- common/autotest_common.sh@10 -- # set +x 00:08:02.843 14:54:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.843 14:54:26 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.843 14:54:26 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.843 14:54:26 -- common/autotest_common.sh@10 -- # set +x 00:08:02.844 [2024-11-18 14:54:26.369152] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.844 14:54:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.844 14:54:26 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:02.844 14:54:26 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:02.844 14:54:26 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:02.844 14:54:26 -- common/autotest_common.sh@1369 -- # local bs 00:08:02.844 14:54:26 -- common/autotest_common.sh@1370 -- # local nb 00:08:02.844 14:54:26 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:02.844 14:54:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.844 14:54:26 -- common/autotest_common.sh@10 -- # set +x 00:08:02.844 14:54:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.844 14:54:26 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:02.844 { 00:08:02.844 "aliases": [ 00:08:02.844 "456d13c4-d0f1-4be9-8af3-45adbd32a8f3" 00:08:02.844 ], 00:08:02.844 "assigned_rate_limits": { 00:08:02.844 "r_mbytes_per_sec": 0, 00:08:02.844 "rw_ios_per_sec": 0, 00:08:02.844 "rw_mbytes_per_sec": 0, 00:08:02.844 "w_mbytes_per_sec": 0 00:08:02.844 }, 00:08:02.844 "block_size": 512, 00:08:02.844 "claim_type": "exclusive_write", 00:08:02.844 "claimed": true, 00:08:02.844 "driver_specific": {}, 00:08:02.844 "memory_domains": [ 00:08:02.844 { 00:08:02.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.844 "dma_device_type": 2 00:08:02.844 } 00:08:02.844 ], 00:08:02.844 "name": "Malloc1", 00:08:02.844 "num_blocks": 1048576, 00:08:02.844 "product_name": "Malloc disk", 00:08:02.844 "supported_io_types": { 00:08:02.844 "abort": true, 00:08:02.844 "compare": false, 00:08:02.844 "compare_and_write": false, 00:08:02.844 "flush": true, 00:08:02.844 "nvme_admin": false, 00:08:02.844 "nvme_io": false, 00:08:02.844 "read": true, 00:08:02.844 "reset": true, 00:08:02.844 "unmap": true, 00:08:02.844 "write": true, 00:08:02.844 "write_zeroes": true 00:08:02.844 }, 00:08:02.844 "uuid": "456d13c4-d0f1-4be9-8af3-45adbd32a8f3", 00:08:02.844 "zoned": false 00:08:02.844 } 00:08:02.844 ]' 00:08:02.844 14:54:26 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:03.102 14:54:26 -- common/autotest_common.sh@1372 -- # bs=512 00:08:03.102 14:54:26 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:03.102 14:54:26 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:03.102 14:54:26 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:03.102 14:54:26 -- common/autotest_common.sh@1377 -- # echo 512 00:08:03.102 14:54:26 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:03.102 14:54:26 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:03.102 14:54:26 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:03.102 14:54:26 -- common/autotest_common.sh@1187 -- # local i=0 00:08:03.102 14:54:26 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:03.102 14:54:26 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:03.102 14:54:26 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:05.633 14:54:28 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:05.633 14:54:28 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:05.633 14:54:28 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:05.633 14:54:28 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:05.633 14:54:28 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:05.633 14:54:28 -- common/autotest_common.sh@1197 -- # return 0 00:08:05.633 14:54:28 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:05.633 14:54:28 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:05.633 14:54:28 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:05.633 14:54:28 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:05.633 14:54:28 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:05.633 14:54:28 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:05.633 14:54:28 -- setup/common.sh@80 -- # echo 536870912 00:08:05.633 14:54:28 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:05.633 14:54:28 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:05.633 14:54:28 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:05.633 14:54:28 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:05.633 14:54:28 -- target/filesystem.sh@69 -- # partprobe 00:08:05.634 14:54:28 -- target/filesystem.sh@70 -- # sleep 1 00:08:06.568 14:54:29 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:06.568 14:54:29 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:06.568 14:54:29 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:06.568 14:54:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:06.568 14:54:29 -- common/autotest_common.sh@10 -- # set +x 00:08:06.568 ************************************ 00:08:06.568 START TEST filesystem_in_capsule_ext4 00:08:06.568 ************************************ 00:08:06.568 14:54:29 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:06.568 14:54:29 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:06.568 14:54:29 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:06.568 14:54:29 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:06.568 14:54:29 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:06.568 14:54:29 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:06.568 14:54:29 -- common/autotest_common.sh@914 -- # local i=0 00:08:06.568 14:54:29 -- common/autotest_common.sh@915 -- # local force 00:08:06.568 14:54:29 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:06.568 14:54:29 -- common/autotest_common.sh@918 -- # force=-F 00:08:06.568 14:54:29 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:06.568 mke2fs 1.47.0 (5-Feb-2023) 00:08:06.568 Discarding device blocks: 0/522240 done 00:08:06.568 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:06.568 Filesystem UUID: 8f551bc5-72e2-427e-a311-aba9ab27d640 00:08:06.568 Superblock backups stored on blocks: 00:08:06.568 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:06.568 00:08:06.568 Allocating group tables: 0/64 done 00:08:06.568 Writing inode tables: 0/64 done 00:08:06.568 Creating journal (8192 blocks): done 00:08:06.568 Writing superblocks and filesystem accounting information: 0/64 done 00:08:06.568 00:08:06.568 14:54:29 
-- common/autotest_common.sh@931 -- # return 0 00:08:06.568 14:54:29 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:11.834 14:54:35 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:11.834 14:54:35 -- target/filesystem.sh@25 -- # sync 00:08:12.092 14:54:35 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:12.092 14:54:35 -- target/filesystem.sh@27 -- # sync 00:08:12.092 14:54:35 -- target/filesystem.sh@29 -- # i=0 00:08:12.092 14:54:35 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:12.092 14:54:35 -- target/filesystem.sh@37 -- # kill -0 72947 00:08:12.092 14:54:35 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:12.092 14:54:35 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:12.092 14:54:35 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:12.092 14:54:35 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:12.092 ************************************ 00:08:12.092 END TEST filesystem_in_capsule_ext4 00:08:12.092 ************************************ 00:08:12.092 00:08:12.092 real 0m5.661s 00:08:12.092 user 0m0.022s 00:08:12.092 sys 0m0.069s 00:08:12.092 14:54:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:12.092 14:54:35 -- common/autotest_common.sh@10 -- # set +x 00:08:12.092 14:54:35 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:12.092 14:54:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:12.092 14:54:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.092 14:54:35 -- common/autotest_common.sh@10 -- # set +x 00:08:12.092 ************************************ 00:08:12.092 START TEST filesystem_in_capsule_btrfs 00:08:12.092 ************************************ 00:08:12.092 14:54:35 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:12.092 14:54:35 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:12.092 14:54:35 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:12.092 14:54:35 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:12.092 14:54:35 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:12.092 14:54:35 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:12.092 14:54:35 -- common/autotest_common.sh@914 -- # local i=0 00:08:12.092 14:54:35 -- common/autotest_common.sh@915 -- # local force 00:08:12.092 14:54:35 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:12.092 14:54:35 -- common/autotest_common.sh@920 -- # force=-f 00:08:12.092 14:54:35 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:12.351 btrfs-progs v6.8.1 00:08:12.351 See https://btrfs.readthedocs.io for more information. 00:08:12.351 00:08:12.351 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
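Each filesystem_* subtest above follows the same pattern from target/filesystem.sh: make a filesystem on the partition, mount it, create and delete a file with syncs in between, unmount, confirm the target process is still alive, and confirm the partition is still visible to the host. A minimal sketch of that body, with names taken from the log (the function wrapper is illustrative, and nvmfpid stands for the PID captured at nvmfappstart):

  check_filesystem() {                          # $1: ext4 | btrfs | xfs, $2: target PID
      local fstype=$1 nvmfpid=$2 dev=/dev/nvme0n1p1
      local force=-f; [ "$fstype" = ext4 ] && force=-F
      "mkfs.$fstype" $force "$dev"
      mount "$dev" /mnt/device
      touch /mnt/device/aaa && sync             # write through the NVMe/TCP path
      rm /mnt/device/aaa && sync
      umount /mnt/device
      kill -0 "$nvmfpid"                        # target must still be running
      lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present on the host
  }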
00:08:12.351 NOTE: several default settings have changed in version 5.15, please make sure 00:08:12.351 this does not affect your deployments: 00:08:12.351 - DUP for metadata (-m dup) 00:08:12.351 - enabled no-holes (-O no-holes) 00:08:12.351 - enabled free-space-tree (-R free-space-tree) 00:08:12.351 00:08:12.351 Label: (null) 00:08:12.351 UUID: aceb0764-f539-469b-9fb2-a63c304fd235 00:08:12.351 Node size: 16384 00:08:12.351 Sector size: 4096 (CPU page size: 4096) 00:08:12.351 Filesystem size: 510.00MiB 00:08:12.351 Block group profiles: 00:08:12.351 Data: single 8.00MiB 00:08:12.351 Metadata: DUP 32.00MiB 00:08:12.351 System: DUP 8.00MiB 00:08:12.351 SSD detected: yes 00:08:12.351 Zoned device: no 00:08:12.351 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:12.351 Checksum: crc32c 00:08:12.351 Number of devices: 1 00:08:12.351 Devices: 00:08:12.351 ID SIZE PATH 00:08:12.351 1 510.00MiB /dev/nvme0n1p1 00:08:12.351 00:08:12.351 14:54:35 -- common/autotest_common.sh@931 -- # return 0 00:08:12.351 14:54:35 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:12.351 14:54:35 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:12.351 14:54:35 -- target/filesystem.sh@25 -- # sync 00:08:12.351 14:54:35 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:12.351 14:54:35 -- target/filesystem.sh@27 -- # sync 00:08:12.351 14:54:35 -- target/filesystem.sh@29 -- # i=0 00:08:12.351 14:54:35 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:12.351 14:54:35 -- target/filesystem.sh@37 -- # kill -0 72947 00:08:12.351 14:54:35 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:12.351 14:54:35 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:12.351 14:54:35 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:12.351 14:54:35 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:12.351 00:08:12.351 real 0m0.356s 00:08:12.351 user 0m0.024s 00:08:12.351 sys 0m0.062s 00:08:12.351 14:54:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:12.351 14:54:35 -- common/autotest_common.sh@10 -- # set +x 00:08:12.351 ************************************ 00:08:12.351 END TEST filesystem_in_capsule_btrfs 00:08:12.351 ************************************ 00:08:12.351 14:54:35 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:12.351 14:54:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:12.351 14:54:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.351 14:54:35 -- common/autotest_common.sh@10 -- # set +x 00:08:12.351 ************************************ 00:08:12.351 START TEST filesystem_in_capsule_xfs 00:08:12.351 ************************************ 00:08:12.351 14:54:35 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:12.351 14:54:35 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:12.351 14:54:35 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:12.351 14:54:35 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:12.351 14:54:35 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:12.351 14:54:35 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:12.351 14:54:35 -- common/autotest_common.sh@914 -- # local i=0 00:08:12.351 14:54:35 -- common/autotest_common.sh@915 -- # local force 00:08:12.351 14:54:35 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:12.351 14:54:35 -- common/autotest_common.sh@920 -- # force=-f 00:08:12.351 14:54:35 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:12.610 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:12.610 = sectsz=512 attr=2, projid32bit=1 00:08:12.610 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:12.610 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:12.610 data = bsize=4096 blocks=130560, imaxpct=25 00:08:12.610 = sunit=0 swidth=0 blks 00:08:12.610 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:12.610 log =internal log bsize=4096 blocks=16384, version=2 00:08:12.610 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:12.610 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:13.177 Discarding blocks...Done. 00:08:13.177 14:54:36 -- common/autotest_common.sh@931 -- # return 0 00:08:13.177 14:54:36 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:15.081 14:54:38 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:15.081 14:54:38 -- target/filesystem.sh@25 -- # sync 00:08:15.081 14:54:38 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:15.081 14:54:38 -- target/filesystem.sh@27 -- # sync 00:08:15.081 14:54:38 -- target/filesystem.sh@29 -- # i=0 00:08:15.081 14:54:38 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:15.082 14:54:38 -- target/filesystem.sh@37 -- # kill -0 72947 00:08:15.082 14:54:38 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:15.082 14:54:38 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:15.082 14:54:38 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:15.082 14:54:38 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:15.082 00:08:15.082 real 0m2.655s 00:08:15.082 user 0m0.025s 00:08:15.082 sys 0m0.056s 00:08:15.082 14:54:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.082 14:54:38 -- common/autotest_common.sh@10 -- # set +x 00:08:15.082 ************************************ 00:08:15.082 END TEST filesystem_in_capsule_xfs 00:08:15.082 ************************************ 00:08:15.082 14:54:38 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:15.082 14:54:38 -- target/filesystem.sh@93 -- # sync 00:08:15.082 14:54:38 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:15.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.347 14:54:38 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:15.347 14:54:38 -- common/autotest_common.sh@1208 -- # local i=0 00:08:15.347 14:54:38 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:15.347 14:54:38 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:15.347 14:54:38 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:15.347 14:54:38 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:15.347 14:54:38 -- common/autotest_common.sh@1220 -- # return 0 00:08:15.347 14:54:38 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:15.347 14:54:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.347 14:54:38 -- common/autotest_common.sh@10 -- # set +x 00:08:15.347 14:54:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.347 14:54:38 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:15.347 14:54:38 -- target/filesystem.sh@101 -- # killprocess 72947 00:08:15.347 14:54:38 -- common/autotest_common.sh@936 -- # '[' -z 72947 ']' 00:08:15.347 14:54:38 -- common/autotest_common.sh@940 -- # kill -0 72947 00:08:15.347 14:54:38 -- 
common/autotest_common.sh@941 -- # uname 00:08:15.347 14:54:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:15.347 14:54:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72947 00:08:15.347 14:54:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:15.347 14:54:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:15.347 killing process with pid 72947 00:08:15.347 14:54:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72947' 00:08:15.347 14:54:38 -- common/autotest_common.sh@955 -- # kill 72947 00:08:15.347 14:54:38 -- common/autotest_common.sh@960 -- # wait 72947 00:08:15.946 14:54:39 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:15.946 00:08:15.946 real 0m14.282s 00:08:15.946 user 0m55.177s 00:08:15.946 sys 0m1.593s 00:08:15.946 14:54:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.946 ************************************ 00:08:15.946 14:54:39 -- common/autotest_common.sh@10 -- # set +x 00:08:15.946 END TEST nvmf_filesystem_in_capsule 00:08:15.946 ************************************ 00:08:15.946 14:54:39 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:15.946 14:54:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:15.946 14:54:39 -- nvmf/common.sh@116 -- # sync 00:08:15.946 14:54:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:15.946 14:54:39 -- nvmf/common.sh@119 -- # set +e 00:08:15.946 14:54:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:15.946 14:54:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:15.946 rmmod nvme_tcp 00:08:15.946 rmmod nvme_fabrics 00:08:15.946 rmmod nvme_keyring 00:08:15.946 14:54:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:15.946 14:54:39 -- nvmf/common.sh@123 -- # set -e 00:08:15.946 14:54:39 -- nvmf/common.sh@124 -- # return 0 00:08:15.946 14:54:39 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:15.946 14:54:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:15.946 14:54:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:15.946 14:54:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:15.946 14:54:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:15.946 14:54:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:15.946 14:54:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.946 14:54:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.946 14:54:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.946 14:54:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:15.946 00:08:15.946 real 0m30.048s 00:08:15.946 user 1m52.619s 00:08:15.946 sys 0m3.597s 00:08:15.946 14:54:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.946 14:54:39 -- common/autotest_common.sh@10 -- # set +x 00:08:15.946 ************************************ 00:08:15.946 END TEST nvmf_filesystem 00:08:15.946 ************************************ 00:08:15.946 14:54:39 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:15.946 14:54:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:15.946 14:54:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.946 14:54:39 -- common/autotest_common.sh@10 -- # set +x 00:08:16.206 ************************************ 00:08:16.206 START TEST nvmf_discovery 00:08:16.206 ************************************ 00:08:16.206 14:54:39 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:16.206 * Looking for test storage... 00:08:16.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:16.206 14:54:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:16.206 14:54:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:16.206 14:54:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:16.206 14:54:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:16.206 14:54:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:16.206 14:54:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:16.206 14:54:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:16.206 14:54:39 -- scripts/common.sh@335 -- # IFS=.-: 00:08:16.206 14:54:39 -- scripts/common.sh@335 -- # read -ra ver1 00:08:16.206 14:54:39 -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.206 14:54:39 -- scripts/common.sh@336 -- # read -ra ver2 00:08:16.206 14:54:39 -- scripts/common.sh@337 -- # local 'op=<' 00:08:16.206 14:54:39 -- scripts/common.sh@339 -- # ver1_l=2 00:08:16.206 14:54:39 -- scripts/common.sh@340 -- # ver2_l=1 00:08:16.206 14:54:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:16.206 14:54:39 -- scripts/common.sh@343 -- # case "$op" in 00:08:16.206 14:54:39 -- scripts/common.sh@344 -- # : 1 00:08:16.206 14:54:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:16.206 14:54:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:16.206 14:54:39 -- scripts/common.sh@364 -- # decimal 1 00:08:16.206 14:54:39 -- scripts/common.sh@352 -- # local d=1 00:08:16.206 14:54:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.206 14:54:39 -- scripts/common.sh@354 -- # echo 1 00:08:16.206 14:54:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:16.206 14:54:39 -- scripts/common.sh@365 -- # decimal 2 00:08:16.206 14:54:39 -- scripts/common.sh@352 -- # local d=2 00:08:16.206 14:54:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.206 14:54:39 -- scripts/common.sh@354 -- # echo 2 00:08:16.206 14:54:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:16.206 14:54:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:16.206 14:54:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:16.206 14:54:39 -- scripts/common.sh@367 -- # return 0 00:08:16.206 14:54:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.206 14:54:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:16.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.206 --rc genhtml_branch_coverage=1 00:08:16.206 --rc genhtml_function_coverage=1 00:08:16.206 --rc genhtml_legend=1 00:08:16.206 --rc geninfo_all_blocks=1 00:08:16.206 --rc geninfo_unexecuted_blocks=1 00:08:16.206 00:08:16.206 ' 00:08:16.206 14:54:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:16.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.206 --rc genhtml_branch_coverage=1 00:08:16.206 --rc genhtml_function_coverage=1 00:08:16.206 --rc genhtml_legend=1 00:08:16.206 --rc geninfo_all_blocks=1 00:08:16.206 --rc geninfo_unexecuted_blocks=1 00:08:16.206 00:08:16.206 ' 00:08:16.206 14:54:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:16.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.206 --rc genhtml_branch_coverage=1 00:08:16.206 --rc genhtml_function_coverage=1 00:08:16.206 --rc genhtml_legend=1 00:08:16.206 
--rc geninfo_all_blocks=1 00:08:16.206 --rc geninfo_unexecuted_blocks=1 00:08:16.206 00:08:16.206 ' 00:08:16.206 14:54:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:16.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.206 --rc genhtml_branch_coverage=1 00:08:16.206 --rc genhtml_function_coverage=1 00:08:16.206 --rc genhtml_legend=1 00:08:16.206 --rc geninfo_all_blocks=1 00:08:16.206 --rc geninfo_unexecuted_blocks=1 00:08:16.206 00:08:16.206 ' 00:08:16.206 14:54:39 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:16.206 14:54:39 -- nvmf/common.sh@7 -- # uname -s 00:08:16.206 14:54:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.206 14:54:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.206 14:54:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.206 14:54:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.206 14:54:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.206 14:54:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.206 14:54:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.206 14:54:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.206 14:54:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.206 14:54:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.206 14:54:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:08:16.206 14:54:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:08:16.206 14:54:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.206 14:54:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.206 14:54:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:16.206 14:54:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:16.206 14:54:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.206 14:54:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.206 14:54:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.206 14:54:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.206 14:54:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.206 14:54:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.206 14:54:39 -- paths/export.sh@5 -- # export PATH 00:08:16.206 14:54:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.206 14:54:39 -- nvmf/common.sh@46 -- # : 0 00:08:16.206 14:54:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:16.206 14:54:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:16.206 14:54:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:16.206 14:54:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.206 14:54:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.206 14:54:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:16.206 14:54:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:16.206 14:54:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:16.206 14:54:39 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:16.206 14:54:39 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:16.206 14:54:39 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:16.206 14:54:39 -- target/discovery.sh@15 -- # hash nvme 00:08:16.206 14:54:39 -- target/discovery.sh@20 -- # nvmftestinit 00:08:16.206 14:54:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:16.206 14:54:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.206 14:54:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:16.206 14:54:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:16.206 14:54:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:16.206 14:54:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.206 14:54:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.206 14:54:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.206 14:54:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:16.206 14:54:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:16.206 14:54:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:16.206 14:54:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:16.206 14:54:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:16.206 14:54:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:16.206 14:54:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.206 14:54:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.206 14:54:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:16.206 14:54:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:16.206 14:54:39 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:16.206 14:54:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:16.206 14:54:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:16.206 14:54:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.206 14:54:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:16.206 14:54:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:16.206 14:54:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:16.206 14:54:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:16.206 14:54:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:16.206 14:54:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:16.206 Cannot find device "nvmf_tgt_br" 00:08:16.206 14:54:39 -- nvmf/common.sh@154 -- # true 00:08:16.206 14:54:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:16.465 Cannot find device "nvmf_tgt_br2" 00:08:16.465 14:54:39 -- nvmf/common.sh@155 -- # true 00:08:16.465 14:54:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:16.465 14:54:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:16.465 Cannot find device "nvmf_tgt_br" 00:08:16.465 14:54:39 -- nvmf/common.sh@157 -- # true 00:08:16.465 14:54:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:16.465 Cannot find device "nvmf_tgt_br2" 00:08:16.465 14:54:39 -- nvmf/common.sh@158 -- # true 00:08:16.465 14:54:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:16.465 14:54:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:16.465 14:54:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:16.465 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:16.465 14:54:39 -- nvmf/common.sh@161 -- # true 00:08:16.465 14:54:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:16.465 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:16.465 14:54:39 -- nvmf/common.sh@162 -- # true 00:08:16.465 14:54:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:16.465 14:54:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:16.466 14:54:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:16.466 14:54:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:16.466 14:54:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:16.466 14:54:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:16.466 14:54:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:16.466 14:54:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:16.466 14:54:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:16.466 14:54:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:16.466 14:54:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:16.466 14:54:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:16.466 14:54:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:16.466 14:54:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:16.466 14:54:39 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:16.466 14:54:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:16.466 14:54:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:16.466 14:54:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:16.466 14:54:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:16.466 14:54:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:16.466 14:54:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:16.466 14:54:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:16.724 14:54:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:16.724 14:54:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:16.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:08:16.724 00:08:16.724 --- 10.0.0.2 ping statistics --- 00:08:16.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.724 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:08:16.724 14:54:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:16.724 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:16.724 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:08:16.724 00:08:16.724 --- 10.0.0.3 ping statistics --- 00:08:16.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.724 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:16.724 14:54:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:16.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:16.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:08:16.724 00:08:16.724 --- 10.0.0.1 ping statistics --- 00:08:16.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.724 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:16.724 14:54:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.724 14:54:40 -- nvmf/common.sh@421 -- # return 0 00:08:16.724 14:54:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:16.724 14:54:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.724 14:54:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:16.724 14:54:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:16.724 14:54:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.724 14:54:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:16.725 14:54:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:16.725 14:54:40 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:16.725 14:54:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:16.725 14:54:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:16.725 14:54:40 -- common/autotest_common.sh@10 -- # set +x 00:08:16.725 14:54:40 -- nvmf/common.sh@469 -- # nvmfpid=73496 00:08:16.725 14:54:40 -- nvmf/common.sh@470 -- # waitforlisten 73496 00:08:16.725 14:54:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:16.725 14:54:40 -- common/autotest_common.sh@829 -- # '[' -z 73496 ']' 00:08:16.725 14:54:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.725 14:54:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:16.725 14:54:40 -- 
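The nvmf_veth_init sequence traced above builds the whole test fabric from scratch: the "Cannot find device" / "Cannot open network namespace" messages are just the pre-setup cleanup failing harmlessly because nothing exists yet, after which one veth pair per endpoint is created, the target-side ends are moved into the nvmf_tgt_ns_spdk namespace, and the host-side peers are enslaved to the nvmf_br bridge. Condensed into a standalone sketch (root assumed, no error handling; addresses and interface names exactly as in the trace):

    ip netns add nvmf_tgt_ns_spdk

    # one veth pair per endpoint; the *_br peers stay in the default namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiator on 10.0.0.1, target listeners on 10.0.0.2/10.0.0.3 inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done

    # bridge the host-side peers together and let NVMe/TCP (port 4420) through
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # same reachability checks as the pings above
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1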
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.725 14:54:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:16.725 14:54:40 -- common/autotest_common.sh@10 -- # set +x 00:08:16.725 [2024-11-18 14:54:40.155833] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:16.725 [2024-11-18 14:54:40.155916] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.725 [2024-11-18 14:54:40.303029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:16.983 [2024-11-18 14:54:40.369444] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:16.983 [2024-11-18 14:54:40.369629] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.983 [2024-11-18 14:54:40.369645] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.983 [2024-11-18 14:54:40.369656] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:16.983 [2024-11-18 14:54:40.369840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.983 [2024-11-18 14:54:40.370287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.983 [2024-11-18 14:54:40.370450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.983 [2024-11-18 14:54:40.370536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.549 14:54:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:17.549 14:54:41 -- common/autotest_common.sh@862 -- # return 0 00:08:17.549 14:54:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:17.549 14:54:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:17.549 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.549 14:54:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.549 14:54:41 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:17.549 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.549 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.808 [2024-11-18 14:54:41.147620] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.808 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.808 14:54:41 -- target/discovery.sh@26 -- # seq 1 4 00:08:17.808 14:54:41 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:17.808 14:54:41 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:17.808 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.808 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.808 Null1 00:08:17.808 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.808 14:54:41 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:17.808 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.808 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.808 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:08:17.808 14:54:41 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:17.808 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.808 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.808 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.808 14:54:41 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:17.808 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.808 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.808 [2024-11-18 14:54:41.202095] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.808 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.808 14:54:41 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:17.808 14:54:41 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:17.808 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.808 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.808 Null2 00:08:17.808 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.808 14:54:41 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:17.808 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.808 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.808 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.808 14:54:41 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:17.808 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.808 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.808 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.809 14:54:41 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:17.809 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.809 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.809 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.809 14:54:41 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:17.809 14:54:41 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:17.809 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.809 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.809 Null3 00:08:17.809 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.809 14:54:41 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:17.809 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.809 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.809 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.809 14:54:41 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:17.809 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.809 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.809 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.809 14:54:41 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:17.809 14:54:41 -- common/autotest_common.sh@561 -- # 
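Around this point discovery.sh has started the target inside the namespace and is working through its seq 1 4 loop, giving each of the four subsystems one null bdev (the NULL_BDEV_SIZE=102400 / NULL_BLOCK_SIZE=512 defaults set earlier), a serial number, and a TCP listener on 10.0.0.2:4420. A condensed sketch of that bring-up, with rpc_cmd written out as a plain stand-in for the autotest helper of the same name (assumption: scripts/rpc.py pointed at the /var/tmp/spdk.sock socket the target announces above):

    rpc_cmd() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # stand-in for the autotest rpc_cmd helper

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done          # the real script blocks with waitforlisten

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192

    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create "Null$i" 102400 512
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

Immediately after this loop the script also adds a listener for the discovery subsystem and a referral on port 4430, which is why the discovery log further down reports six records.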
xtrace_disable 00:08:17.809 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.809 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.809 14:54:41 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:17.809 14:54:41 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:17.809 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.809 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.809 Null4 00:08:17.809 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.809 14:54:41 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:17.809 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.809 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.809 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.809 14:54:41 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:17.809 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.809 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.809 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.809 14:54:41 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:17.809 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.809 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.809 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.809 14:54:41 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:17.809 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.809 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.809 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.809 14:54:41 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:17.809 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.809 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:17.809 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.809 14:54:41 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -a 10.0.0.2 -s 4420 00:08:18.068 00:08:18.068 Discovery Log Number of Records 6, Generation counter 6 00:08:18.068 =====Discovery Log Entry 0====== 00:08:18.068 trtype: tcp 00:08:18.068 adrfam: ipv4 00:08:18.068 subtype: current discovery subsystem 00:08:18.068 treq: not required 00:08:18.068 portid: 0 00:08:18.068 trsvcid: 4420 00:08:18.068 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:18.068 traddr: 10.0.0.2 00:08:18.068 eflags: explicit discovery connections, duplicate discovery information 00:08:18.068 sectype: none 00:08:18.068 =====Discovery Log Entry 1====== 00:08:18.068 trtype: tcp 00:08:18.068 adrfam: ipv4 00:08:18.068 subtype: nvme subsystem 00:08:18.068 treq: not required 00:08:18.068 portid: 0 00:08:18.068 trsvcid: 4420 00:08:18.068 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:18.068 traddr: 10.0.0.2 00:08:18.068 eflags: none 00:08:18.068 sectype: none 00:08:18.068 =====Discovery Log Entry 2====== 00:08:18.068 trtype: tcp 00:08:18.068 adrfam: ipv4 00:08:18.068 subtype: nvme subsystem 00:08:18.068 treq: not required 00:08:18.068 portid: 0 00:08:18.068 trsvcid: 4420 
00:08:18.068 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:18.068 traddr: 10.0.0.2 00:08:18.068 eflags: none 00:08:18.068 sectype: none 00:08:18.068 =====Discovery Log Entry 3====== 00:08:18.068 trtype: tcp 00:08:18.068 adrfam: ipv4 00:08:18.068 subtype: nvme subsystem 00:08:18.068 treq: not required 00:08:18.068 portid: 0 00:08:18.068 trsvcid: 4420 00:08:18.068 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:18.068 traddr: 10.0.0.2 00:08:18.068 eflags: none 00:08:18.068 sectype: none 00:08:18.068 =====Discovery Log Entry 4====== 00:08:18.068 trtype: tcp 00:08:18.068 adrfam: ipv4 00:08:18.068 subtype: nvme subsystem 00:08:18.068 treq: not required 00:08:18.068 portid: 0 00:08:18.068 trsvcid: 4420 00:08:18.068 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:18.068 traddr: 10.0.0.2 00:08:18.068 eflags: none 00:08:18.068 sectype: none 00:08:18.068 =====Discovery Log Entry 5====== 00:08:18.068 trtype: tcp 00:08:18.068 adrfam: ipv4 00:08:18.068 subtype: discovery subsystem referral 00:08:18.068 treq: not required 00:08:18.068 portid: 0 00:08:18.068 trsvcid: 4430 00:08:18.068 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:18.068 traddr: 10.0.0.2 00:08:18.068 eflags: none 00:08:18.068 sectype: none 00:08:18.068 Perform nvmf subsystem discovery via RPC 00:08:18.068 14:54:41 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:18.068 14:54:41 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:18.068 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.068 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:18.068 [2024-11-18 14:54:41.434290] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:18.068 [ 00:08:18.068 { 00:08:18.068 "allow_any_host": true, 00:08:18.068 "hosts": [], 00:08:18.068 "listen_addresses": [ 00:08:18.068 { 00:08:18.068 "adrfam": "IPv4", 00:08:18.068 "traddr": "10.0.0.2", 00:08:18.068 "transport": "TCP", 00:08:18.068 "trsvcid": "4420", 00:08:18.068 "trtype": "TCP" 00:08:18.068 } 00:08:18.068 ], 00:08:18.068 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:18.068 "subtype": "Discovery" 00:08:18.068 }, 00:08:18.068 { 00:08:18.068 "allow_any_host": true, 00:08:18.068 "hosts": [], 00:08:18.068 "listen_addresses": [ 00:08:18.068 { 00:08:18.068 "adrfam": "IPv4", 00:08:18.068 "traddr": "10.0.0.2", 00:08:18.068 "transport": "TCP", 00:08:18.068 "trsvcid": "4420", 00:08:18.068 "trtype": "TCP" 00:08:18.068 } 00:08:18.068 ], 00:08:18.068 "max_cntlid": 65519, 00:08:18.068 "max_namespaces": 32, 00:08:18.068 "min_cntlid": 1, 00:08:18.068 "model_number": "SPDK bdev Controller", 00:08:18.068 "namespaces": [ 00:08:18.068 { 00:08:18.068 "bdev_name": "Null1", 00:08:18.068 "name": "Null1", 00:08:18.068 "nguid": "7ADA48E265BB4F4496A199B1AD40EA3C", 00:08:18.068 "nsid": 1, 00:08:18.068 "uuid": "7ada48e2-65bb-4f44-96a1-99b1ad40ea3c" 00:08:18.068 } 00:08:18.068 ], 00:08:18.068 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:18.068 "serial_number": "SPDK00000000000001", 00:08:18.068 "subtype": "NVMe" 00:08:18.068 }, 00:08:18.068 { 00:08:18.068 "allow_any_host": true, 00:08:18.068 "hosts": [], 00:08:18.068 "listen_addresses": [ 00:08:18.068 { 00:08:18.068 "adrfam": "IPv4", 00:08:18.068 "traddr": "10.0.0.2", 00:08:18.068 "transport": "TCP", 00:08:18.068 "trsvcid": "4420", 00:08:18.068 "trtype": "TCP" 00:08:18.068 } 00:08:18.068 ], 00:08:18.068 "max_cntlid": 65519, 00:08:18.068 "max_namespaces": 32, 00:08:18.068 "min_cntlid": 1, 
00:08:18.068 "model_number": "SPDK bdev Controller", 00:08:18.068 "namespaces": [ 00:08:18.068 { 00:08:18.068 "bdev_name": "Null2", 00:08:18.068 "name": "Null2", 00:08:18.068 "nguid": "E7EC8EEE63F0438FA407C3299F195099", 00:08:18.068 "nsid": 1, 00:08:18.068 "uuid": "e7ec8eee-63f0-438f-a407-c3299f195099" 00:08:18.068 } 00:08:18.068 ], 00:08:18.068 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:18.068 "serial_number": "SPDK00000000000002", 00:08:18.068 "subtype": "NVMe" 00:08:18.068 }, 00:08:18.068 { 00:08:18.068 "allow_any_host": true, 00:08:18.068 "hosts": [], 00:08:18.068 "listen_addresses": [ 00:08:18.068 { 00:08:18.068 "adrfam": "IPv4", 00:08:18.068 "traddr": "10.0.0.2", 00:08:18.068 "transport": "TCP", 00:08:18.068 "trsvcid": "4420", 00:08:18.068 "trtype": "TCP" 00:08:18.068 } 00:08:18.068 ], 00:08:18.068 "max_cntlid": 65519, 00:08:18.068 "max_namespaces": 32, 00:08:18.068 "min_cntlid": 1, 00:08:18.069 "model_number": "SPDK bdev Controller", 00:08:18.069 "namespaces": [ 00:08:18.069 { 00:08:18.069 "bdev_name": "Null3", 00:08:18.069 "name": "Null3", 00:08:18.069 "nguid": "FEC943B73D164008B8A2C44952DAC5D5", 00:08:18.069 "nsid": 1, 00:08:18.069 "uuid": "fec943b7-3d16-4008-b8a2-c44952dac5d5" 00:08:18.069 } 00:08:18.069 ], 00:08:18.069 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:18.069 "serial_number": "SPDK00000000000003", 00:08:18.069 "subtype": "NVMe" 00:08:18.069 }, 00:08:18.069 { 00:08:18.069 "allow_any_host": true, 00:08:18.069 "hosts": [], 00:08:18.069 "listen_addresses": [ 00:08:18.069 { 00:08:18.069 "adrfam": "IPv4", 00:08:18.069 "traddr": "10.0.0.2", 00:08:18.069 "transport": "TCP", 00:08:18.069 "trsvcid": "4420", 00:08:18.069 "trtype": "TCP" 00:08:18.069 } 00:08:18.069 ], 00:08:18.069 "max_cntlid": 65519, 00:08:18.069 "max_namespaces": 32, 00:08:18.069 "min_cntlid": 1, 00:08:18.069 "model_number": "SPDK bdev Controller", 00:08:18.069 "namespaces": [ 00:08:18.069 { 00:08:18.069 "bdev_name": "Null4", 00:08:18.069 "name": "Null4", 00:08:18.069 "nguid": "ECA2C8657F52423AACB6E158BBFDF278", 00:08:18.069 "nsid": 1, 00:08:18.069 "uuid": "eca2c865-7f52-423a-acb6-e158bbfdf278" 00:08:18.069 } 00:08:18.069 ], 00:08:18.069 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:18.069 "serial_number": "SPDK00000000000004", 00:08:18.069 "subtype": "NVMe" 00:08:18.069 } 00:08:18.069 ] 00:08:18.069 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.069 14:54:41 -- target/discovery.sh@42 -- # seq 1 4 00:08:18.069 14:54:41 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:18.069 14:54:41 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:18.069 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.069 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:18.069 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.069 14:54:41 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:18.069 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.069 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:18.069 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.069 14:54:41 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:18.069 14:54:41 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:18.069 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.069 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:18.069 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.069 14:54:41 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:18.069 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.069 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:18.069 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.069 14:54:41 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:18.069 14:54:41 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:18.069 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.069 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:18.069 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.069 14:54:41 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:18.069 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.069 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:18.069 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.069 14:54:41 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:18.069 14:54:41 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:18.069 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.069 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:18.069 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.069 14:54:41 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:18.069 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.069 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:18.069 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.069 14:54:41 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:18.069 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.069 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:18.069 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.069 14:54:41 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:18.069 14:54:41 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:18.069 14:54:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.069 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:18.069 14:54:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.069 14:54:41 -- target/discovery.sh@49 -- # check_bdevs= 00:08:18.069 14:54:41 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:18.069 14:54:41 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:18.069 14:54:41 -- target/discovery.sh@57 -- # nvmftestfini 00:08:18.069 14:54:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:18.069 14:54:41 -- nvmf/common.sh@116 -- # sync 00:08:18.069 14:54:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:18.069 14:54:41 -- nvmf/common.sh@119 -- # set +e 00:08:18.069 14:54:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:18.069 14:54:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:18.069 rmmod nvme_tcp 00:08:18.069 rmmod nvme_fabrics 00:08:18.069 rmmod nvme_keyring 00:08:18.328 14:54:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:18.328 14:54:41 -- nvmf/common.sh@123 -- # set -e 00:08:18.328 14:54:41 -- nvmf/common.sh@124 -- # return 0 00:08:18.328 14:54:41 -- nvmf/common.sh@477 -- # '[' -n 73496 ']' 00:08:18.328 14:54:41 -- nvmf/common.sh@478 -- # killprocess 73496 00:08:18.328 14:54:41 -- common/autotest_common.sh@936 -- # '[' -z 73496 ']' 00:08:18.328 14:54:41 -- 
common/autotest_common.sh@940 -- # kill -0 73496 00:08:18.328 14:54:41 -- common/autotest_common.sh@941 -- # uname 00:08:18.328 14:54:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:18.328 14:54:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73496 00:08:18.328 14:54:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:18.328 14:54:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:18.328 killing process with pid 73496 00:08:18.328 14:54:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73496' 00:08:18.328 14:54:41 -- common/autotest_common.sh@955 -- # kill 73496 00:08:18.328 [2024-11-18 14:54:41.705517] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:18.328 14:54:41 -- common/autotest_common.sh@960 -- # wait 73496 00:08:18.328 14:54:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:18.328 14:54:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:18.328 14:54:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:18.328 14:54:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.328 14:54:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:18.328 14:54:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.328 14:54:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.328 14:54:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.587 14:54:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:18.587 ************************************ 00:08:18.587 END TEST nvmf_discovery 00:08:18.587 00:08:18.587 real 0m2.399s 00:08:18.587 user 0m6.456s 00:08:18.587 sys 0m0.656s 00:08:18.587 14:54:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.587 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:18.587 ************************************ 00:08:18.587 14:54:41 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:18.587 14:54:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:18.587 14:54:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.587 14:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:18.587 ************************************ 00:08:18.587 START TEST nvmf_referrals 00:08:18.587 ************************************ 00:08:18.587 14:54:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:18.587 * Looking for test storage... 
00:08:18.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:18.587 14:54:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:18.587 14:54:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:18.587 14:54:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:18.587 14:54:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:18.587 14:54:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:18.587 14:54:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:18.587 14:54:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:18.587 14:54:42 -- scripts/common.sh@335 -- # IFS=.-: 00:08:18.587 14:54:42 -- scripts/common.sh@335 -- # read -ra ver1 00:08:18.587 14:54:42 -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.587 14:54:42 -- scripts/common.sh@336 -- # read -ra ver2 00:08:18.587 14:54:42 -- scripts/common.sh@337 -- # local 'op=<' 00:08:18.587 14:54:42 -- scripts/common.sh@339 -- # ver1_l=2 00:08:18.587 14:54:42 -- scripts/common.sh@340 -- # ver2_l=1 00:08:18.587 14:54:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:18.587 14:54:42 -- scripts/common.sh@343 -- # case "$op" in 00:08:18.587 14:54:42 -- scripts/common.sh@344 -- # : 1 00:08:18.587 14:54:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:18.587 14:54:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:18.587 14:54:42 -- scripts/common.sh@364 -- # decimal 1 00:08:18.587 14:54:42 -- scripts/common.sh@352 -- # local d=1 00:08:18.587 14:54:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.587 14:54:42 -- scripts/common.sh@354 -- # echo 1 00:08:18.587 14:54:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:18.587 14:54:42 -- scripts/common.sh@365 -- # decimal 2 00:08:18.587 14:54:42 -- scripts/common.sh@352 -- # local d=2 00:08:18.587 14:54:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.587 14:54:42 -- scripts/common.sh@354 -- # echo 2 00:08:18.587 14:54:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:18.587 14:54:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:18.587 14:54:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:18.587 14:54:42 -- scripts/common.sh@367 -- # return 0 00:08:18.587 14:54:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.587 14:54:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:18.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.587 --rc genhtml_branch_coverage=1 00:08:18.587 --rc genhtml_function_coverage=1 00:08:18.587 --rc genhtml_legend=1 00:08:18.587 --rc geninfo_all_blocks=1 00:08:18.587 --rc geninfo_unexecuted_blocks=1 00:08:18.587 00:08:18.587 ' 00:08:18.587 14:54:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:18.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.587 --rc genhtml_branch_coverage=1 00:08:18.587 --rc genhtml_function_coverage=1 00:08:18.587 --rc genhtml_legend=1 00:08:18.587 --rc geninfo_all_blocks=1 00:08:18.587 --rc geninfo_unexecuted_blocks=1 00:08:18.587 00:08:18.587 ' 00:08:18.587 14:54:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:18.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.587 --rc genhtml_branch_coverage=1 00:08:18.587 --rc genhtml_function_coverage=1 00:08:18.587 --rc genhtml_legend=1 00:08:18.587 --rc geninfo_all_blocks=1 00:08:18.587 --rc geninfo_unexecuted_blocks=1 00:08:18.587 00:08:18.587 ' 00:08:18.587 
14:54:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:18.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.587 --rc genhtml_branch_coverage=1 00:08:18.587 --rc genhtml_function_coverage=1 00:08:18.587 --rc genhtml_legend=1 00:08:18.587 --rc geninfo_all_blocks=1 00:08:18.587 --rc geninfo_unexecuted_blocks=1 00:08:18.587 00:08:18.587 ' 00:08:18.587 14:54:42 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:18.587 14:54:42 -- nvmf/common.sh@7 -- # uname -s 00:08:18.847 14:54:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.847 14:54:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.847 14:54:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.847 14:54:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.847 14:54:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.847 14:54:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.847 14:54:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.847 14:54:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.847 14:54:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.847 14:54:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.847 14:54:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:08:18.847 14:54:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:08:18.847 14:54:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.847 14:54:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.847 14:54:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:18.847 14:54:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:18.847 14:54:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.847 14:54:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.847 14:54:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.847 14:54:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.847 14:54:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.847 14:54:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.847 14:54:42 -- paths/export.sh@5 -- # export PATH 00:08:18.847 14:54:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.847 14:54:42 -- nvmf/common.sh@46 -- # : 0 00:08:18.847 14:54:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:18.847 14:54:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:18.847 14:54:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:18.847 14:54:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.847 14:54:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.847 14:54:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:18.847 14:54:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:18.847 14:54:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:18.847 14:54:42 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:18.847 14:54:42 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:18.847 14:54:42 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:18.847 14:54:42 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:18.847 14:54:42 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:18.847 14:54:42 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:18.847 14:54:42 -- target/referrals.sh@37 -- # nvmftestinit 00:08:18.847 14:54:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:18.847 14:54:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.847 14:54:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:18.847 14:54:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:18.847 14:54:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:18.847 14:54:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.847 14:54:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.847 14:54:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.847 14:54:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:18.847 14:54:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:18.847 14:54:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:18.847 14:54:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:18.847 14:54:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:18.847 14:54:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:18.847 14:54:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.847 14:54:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:08:18.847 14:54:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:18.847 14:54:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:18.847 14:54:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:18.847 14:54:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:18.847 14:54:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:18.847 14:54:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.847 14:54:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:18.847 14:54:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:18.847 14:54:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:18.847 14:54:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:18.847 14:54:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:18.847 14:54:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:18.847 Cannot find device "nvmf_tgt_br" 00:08:18.847 14:54:42 -- nvmf/common.sh@154 -- # true 00:08:18.847 14:54:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:18.847 Cannot find device "nvmf_tgt_br2" 00:08:18.847 14:54:42 -- nvmf/common.sh@155 -- # true 00:08:18.847 14:54:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:18.847 14:54:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:18.847 Cannot find device "nvmf_tgt_br" 00:08:18.847 14:54:42 -- nvmf/common.sh@157 -- # true 00:08:18.847 14:54:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:18.847 Cannot find device "nvmf_tgt_br2" 00:08:18.847 14:54:42 -- nvmf/common.sh@158 -- # true 00:08:18.847 14:54:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:18.847 14:54:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:18.847 14:54:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:18.847 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:18.847 14:54:42 -- nvmf/common.sh@161 -- # true 00:08:18.847 14:54:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:18.847 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:18.847 14:54:42 -- nvmf/common.sh@162 -- # true 00:08:18.847 14:54:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:18.847 14:54:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:18.847 14:54:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:18.847 14:54:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:18.847 14:54:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:18.847 14:54:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:18.847 14:54:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:18.847 14:54:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:18.847 14:54:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:18.847 14:54:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:19.106 14:54:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:19.106 14:54:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:08:19.106 14:54:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:19.106 14:54:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:19.106 14:54:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:19.106 14:54:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:19.106 14:54:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:19.106 14:54:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:19.106 14:54:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:19.106 14:54:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:19.106 14:54:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:19.106 14:54:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:19.106 14:54:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:19.106 14:54:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:19.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:08:19.106 00:08:19.106 --- 10.0.0.2 ping statistics --- 00:08:19.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.106 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:08:19.106 14:54:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:19.107 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:19.107 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:08:19.107 00:08:19.107 --- 10.0.0.3 ping statistics --- 00:08:19.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.107 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:08:19.107 14:54:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:19.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:19.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:08:19.107 00:08:19.107 --- 10.0.0.1 ping statistics --- 00:08:19.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.107 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:19.107 14:54:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.107 14:54:42 -- nvmf/common.sh@421 -- # return 0 00:08:19.107 14:54:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:19.107 14:54:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.107 14:54:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:19.107 14:54:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:19.107 14:54:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.107 14:54:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:19.107 14:54:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:19.107 14:54:42 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:19.107 14:54:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:19.107 14:54:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:19.107 14:54:42 -- common/autotest_common.sh@10 -- # set +x 00:08:19.107 14:54:42 -- nvmf/common.sh@469 -- # nvmfpid=73738 00:08:19.107 14:54:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:19.107 14:54:42 -- nvmf/common.sh@470 -- # waitforlisten 73738 00:08:19.107 14:54:42 -- common/autotest_common.sh@829 -- # '[' -z 73738 ']' 00:08:19.107 14:54:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.107 14:54:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:19.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.107 14:54:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.107 14:54:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:19.107 14:54:42 -- common/autotest_common.sh@10 -- # set +x 00:08:19.107 [2024-11-18 14:54:42.618079] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:19.107 [2024-11-18 14:54:42.618162] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.366 [2024-11-18 14:54:42.756964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.366 [2024-11-18 14:54:42.823085] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:19.366 [2024-11-18 14:54:42.823221] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.366 [2024-11-18 14:54:42.823234] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.366 [2024-11-18 14:54:42.823242] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
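Each nvmf_tgt start in this log prints one "Reactor started on core N" notice per bit set in the -m core mask: -m 0xF is binary 1111, hence the "Total cores available: 4" notice and reactors on cores 0 through 3, while -e 0xFFFF is the tracepoint group mask echoed by app_setup_trace. A one-liner to expand such a mask:

    mask=0xF; for core in $(seq 0 31); do (( (mask >> core) & 1 )) && echo "core $core"; done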
00:08:19.366 [2024-11-18 14:54:42.823808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.366 [2024-11-18 14:54:42.823964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.366 [2024-11-18 14:54:42.824402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.366 [2024-11-18 14:54:42.824407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.303 14:54:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:20.303 14:54:43 -- common/autotest_common.sh@862 -- # return 0 00:08:20.303 14:54:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:20.303 14:54:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:20.303 14:54:43 -- common/autotest_common.sh@10 -- # set +x 00:08:20.303 14:54:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.303 14:54:43 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:20.303 14:54:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.303 14:54:43 -- common/autotest_common.sh@10 -- # set +x 00:08:20.303 [2024-11-18 14:54:43.694735] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.303 14:54:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.303 14:54:43 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:20.303 14:54:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.303 14:54:43 -- common/autotest_common.sh@10 -- # set +x 00:08:20.303 [2024-11-18 14:54:43.727207] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:20.303 14:54:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.303 14:54:43 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:20.303 14:54:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.303 14:54:43 -- common/autotest_common.sh@10 -- # set +x 00:08:20.303 14:54:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.303 14:54:43 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:20.303 14:54:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.303 14:54:43 -- common/autotest_common.sh@10 -- # set +x 00:08:20.303 14:54:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.303 14:54:43 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:20.303 14:54:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.303 14:54:43 -- common/autotest_common.sh@10 -- # set +x 00:08:20.303 14:54:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.303 14:54:43 -- target/referrals.sh@48 -- # jq length 00:08:20.303 14:54:43 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:20.303 14:54:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.303 14:54:43 -- common/autotest_common.sh@10 -- # set +x 00:08:20.303 14:54:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.303 14:54:43 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:20.303 14:54:43 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:20.303 14:54:43 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:20.303 14:54:43 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:20.303 14:54:43 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 
00:08:20.303 14:54:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.303 14:54:43 -- common/autotest_common.sh@10 -- # set +x 00:08:20.304 14:54:43 -- target/referrals.sh@21 -- # sort 00:08:20.304 14:54:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.304 14:54:43 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:20.304 14:54:43 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:20.304 14:54:43 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:20.304 14:54:43 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:20.304 14:54:43 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:20.304 14:54:43 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.304 14:54:43 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:20.304 14:54:43 -- target/referrals.sh@26 -- # sort 00:08:20.562 14:54:43 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:20.562 14:54:43 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:20.562 14:54:43 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:20.562 14:54:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.562 14:54:43 -- common/autotest_common.sh@10 -- # set +x 00:08:20.562 14:54:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.562 14:54:43 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:20.562 14:54:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.562 14:54:43 -- common/autotest_common.sh@10 -- # set +x 00:08:20.562 14:54:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.562 14:54:43 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:20.562 14:54:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.562 14:54:44 -- common/autotest_common.sh@10 -- # set +x 00:08:20.562 14:54:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.562 14:54:44 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:20.562 14:54:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.562 14:54:44 -- target/referrals.sh@56 -- # jq length 00:08:20.562 14:54:44 -- common/autotest_common.sh@10 -- # set +x 00:08:20.562 14:54:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.562 14:54:44 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:20.562 14:54:44 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:20.562 14:54:44 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:20.562 14:54:44 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:20.562 14:54:44 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.562 14:54:44 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:20.563 14:54:44 -- target/referrals.sh@26 -- # sort 00:08:20.821 14:54:44 -- target/referrals.sh@26 -- # echo 00:08:20.822 14:54:44 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:20.822 14:54:44 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:20.822 14:54:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.822 14:54:44 -- common/autotest_common.sh@10 -- # set +x 00:08:20.822 14:54:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.822 14:54:44 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:20.822 14:54:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.822 14:54:44 -- common/autotest_common.sh@10 -- # set +x 00:08:20.822 14:54:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.822 14:54:44 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:20.822 14:54:44 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:20.822 14:54:44 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:20.822 14:54:44 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:20.822 14:54:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.822 14:54:44 -- common/autotest_common.sh@10 -- # set +x 00:08:20.822 14:54:44 -- target/referrals.sh@21 -- # sort 00:08:20.822 14:54:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.822 14:54:44 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:20.822 14:54:44 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:20.822 14:54:44 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:20.822 14:54:44 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:20.822 14:54:44 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:20.822 14:54:44 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.822 14:54:44 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:20.822 14:54:44 -- target/referrals.sh@26 -- # sort 00:08:20.822 14:54:44 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:20.822 14:54:44 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:20.822 14:54:44 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:20.822 14:54:44 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:20.822 14:54:44 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:20.822 14:54:44 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.822 14:54:44 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:21.081 14:54:44 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:21.081 14:54:44 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:21.081 14:54:44 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:21.081 14:54:44 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:21.081 14:54:44 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 
--hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:21.081 14:54:44 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:21.081 14:54:44 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:21.081 14:54:44 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:21.081 14:54:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.081 14:54:44 -- common/autotest_common.sh@10 -- # set +x 00:08:21.081 14:54:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.081 14:54:44 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:21.081 14:54:44 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:21.081 14:54:44 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:21.081 14:54:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.081 14:54:44 -- target/referrals.sh@21 -- # sort 00:08:21.081 14:54:44 -- common/autotest_common.sh@10 -- # set +x 00:08:21.081 14:54:44 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:21.081 14:54:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.340 14:54:44 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:21.340 14:54:44 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:21.340 14:54:44 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:21.340 14:54:44 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:21.340 14:54:44 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:21.340 14:54:44 -- target/referrals.sh@26 -- # sort 00:08:21.340 14:54:44 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:21.340 14:54:44 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:21.340 14:54:44 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:21.340 14:54:44 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:21.340 14:54:44 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:21.340 14:54:44 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:21.340 14:54:44 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:21.340 14:54:44 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:21.340 14:54:44 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:21.340 14:54:44 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:21.340 14:54:44 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:21.340 14:54:44 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:21.340 14:54:44 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:21.340 14:54:44 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:21.340 14:54:44 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
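The checks running through this part of the trace are one comparison repeated under different referral configurations: the referral list as the target reports it over RPC versus the referral records a host actually receives from the discovery service. A condensed sketch of that check, reusing the jq filters shown in the trace; NVME_HOST is the --hostnqn/--hostid pair defined in nvmf/common.sh:

    # view 1: referrals known to the target, via RPC
    rpc_ips=$(./scripts/rpc.py nvmf_discovery_get_referrals \
        | jq -r '.[].address.traddr' | sort | xargs)
    # view 2: referral records returned to a host in the discovery log page
    nvme_ips=$(nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
        | sort | xargs)
    # both views must agree for the test to pass
    [[ "$rpc_ips" == "$nvme_ips" ]]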
00:08:21.599 14:54:45 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:21.599 14:54:45 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:21.599 14:54:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.599 14:54:45 -- common/autotest_common.sh@10 -- # set +x 00:08:21.599 14:54:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.599 14:54:45 -- target/referrals.sh@82 -- # jq length 00:08:21.599 14:54:45 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:21.599 14:54:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.599 14:54:45 -- common/autotest_common.sh@10 -- # set +x 00:08:21.599 14:54:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.599 14:54:45 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:21.599 14:54:45 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:21.599 14:54:45 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:21.599 14:54:45 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:21.599 14:54:45 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:21.599 14:54:45 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:21.599 14:54:45 -- target/referrals.sh@26 -- # sort 00:08:21.858 14:54:45 -- target/referrals.sh@26 -- # echo 00:08:21.858 14:54:45 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:21.858 14:54:45 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:21.858 14:54:45 -- target/referrals.sh@86 -- # nvmftestfini 00:08:21.858 14:54:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:21.858 14:54:45 -- nvmf/common.sh@116 -- # sync 00:08:21.858 14:54:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:21.858 14:54:45 -- nvmf/common.sh@119 -- # set +e 00:08:21.858 14:54:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:21.858 14:54:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:21.858 rmmod nvme_tcp 00:08:21.858 rmmod nvme_fabrics 00:08:21.858 rmmod nvme_keyring 00:08:21.858 14:54:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:21.858 14:54:45 -- nvmf/common.sh@123 -- # set -e 00:08:21.858 14:54:45 -- nvmf/common.sh@124 -- # return 0 00:08:21.858 14:54:45 -- nvmf/common.sh@477 -- # '[' -n 73738 ']' 00:08:21.858 14:54:45 -- nvmf/common.sh@478 -- # killprocess 73738 00:08:21.858 14:54:45 -- common/autotest_common.sh@936 -- # '[' -z 73738 ']' 00:08:21.858 14:54:45 -- common/autotest_common.sh@940 -- # kill -0 73738 00:08:21.858 14:54:45 -- common/autotest_common.sh@941 -- # uname 00:08:21.858 14:54:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:21.858 14:54:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73738 00:08:21.858 14:54:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:21.858 14:54:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:21.858 killing process with pid 73738 00:08:21.858 14:54:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73738' 00:08:21.858 14:54:45 -- common/autotest_common.sh@955 -- # kill 73738 00:08:21.858 14:54:45 -- common/autotest_common.sh@960 -- # wait 73738 00:08:22.117 14:54:45 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:22.117 14:54:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:22.117 14:54:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:22.117 14:54:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:22.117 14:54:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:22.117 14:54:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.117 14:54:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.117 14:54:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.117 14:54:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:22.117 00:08:22.117 real 0m3.621s 00:08:22.117 user 0m12.042s 00:08:22.117 sys 0m0.890s 00:08:22.117 14:54:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:22.117 14:54:45 -- common/autotest_common.sh@10 -- # set +x 00:08:22.117 ************************************ 00:08:22.117 END TEST nvmf_referrals 00:08:22.117 ************************************ 00:08:22.117 14:54:45 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:22.117 14:54:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:22.117 14:54:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:22.117 14:54:45 -- common/autotest_common.sh@10 -- # set +x 00:08:22.117 ************************************ 00:08:22.117 START TEST nvmf_connect_disconnect 00:08:22.117 ************************************ 00:08:22.117 14:54:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:22.377 * Looking for test storage... 00:08:22.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:22.377 14:54:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:22.377 14:54:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:22.377 14:54:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:22.377 14:54:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:22.377 14:54:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:22.377 14:54:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:22.377 14:54:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:22.377 14:54:45 -- scripts/common.sh@335 -- # IFS=.-: 00:08:22.377 14:54:45 -- scripts/common.sh@335 -- # read -ra ver1 00:08:22.377 14:54:45 -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.377 14:54:45 -- scripts/common.sh@336 -- # read -ra ver2 00:08:22.377 14:54:45 -- scripts/common.sh@337 -- # local 'op=<' 00:08:22.377 14:54:45 -- scripts/common.sh@339 -- # ver1_l=2 00:08:22.377 14:54:45 -- scripts/common.sh@340 -- # ver2_l=1 00:08:22.377 14:54:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:22.377 14:54:45 -- scripts/common.sh@343 -- # case "$op" in 00:08:22.377 14:54:45 -- scripts/common.sh@344 -- # : 1 00:08:22.377 14:54:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:22.377 14:54:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.377 14:54:45 -- scripts/common.sh@364 -- # decimal 1 00:08:22.377 14:54:45 -- scripts/common.sh@352 -- # local d=1 00:08:22.377 14:54:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.377 14:54:45 -- scripts/common.sh@354 -- # echo 1 00:08:22.377 14:54:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:22.377 14:54:45 -- scripts/common.sh@365 -- # decimal 2 00:08:22.377 14:54:45 -- scripts/common.sh@352 -- # local d=2 00:08:22.377 14:54:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.377 14:54:45 -- scripts/common.sh@354 -- # echo 2 00:08:22.377 14:54:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:22.377 14:54:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:22.377 14:54:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:22.377 14:54:45 -- scripts/common.sh@367 -- # return 0 00:08:22.377 14:54:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.377 14:54:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.377 --rc genhtml_branch_coverage=1 00:08:22.377 --rc genhtml_function_coverage=1 00:08:22.377 --rc genhtml_legend=1 00:08:22.377 --rc geninfo_all_blocks=1 00:08:22.377 --rc geninfo_unexecuted_blocks=1 00:08:22.377 00:08:22.377 ' 00:08:22.377 14:54:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.377 --rc genhtml_branch_coverage=1 00:08:22.377 --rc genhtml_function_coverage=1 00:08:22.377 --rc genhtml_legend=1 00:08:22.377 --rc geninfo_all_blocks=1 00:08:22.377 --rc geninfo_unexecuted_blocks=1 00:08:22.377 00:08:22.377 ' 00:08:22.377 14:54:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.377 --rc genhtml_branch_coverage=1 00:08:22.377 --rc genhtml_function_coverage=1 00:08:22.377 --rc genhtml_legend=1 00:08:22.377 --rc geninfo_all_blocks=1 00:08:22.377 --rc geninfo_unexecuted_blocks=1 00:08:22.377 00:08:22.377 ' 00:08:22.377 14:54:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.377 --rc genhtml_branch_coverage=1 00:08:22.377 --rc genhtml_function_coverage=1 00:08:22.377 --rc genhtml_legend=1 00:08:22.377 --rc geninfo_all_blocks=1 00:08:22.377 --rc geninfo_unexecuted_blocks=1 00:08:22.377 00:08:22.377 ' 00:08:22.377 14:54:45 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:22.377 14:54:45 -- nvmf/common.sh@7 -- # uname -s 00:08:22.377 14:54:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.377 14:54:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.377 14:54:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.377 14:54:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.377 14:54:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.377 14:54:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.377 14:54:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.377 14:54:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.377 14:54:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.377 14:54:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.377 14:54:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 
00:08:22.377 14:54:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:08:22.377 14:54:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.377 14:54:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.377 14:54:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:22.377 14:54:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:22.377 14:54:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.377 14:54:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.377 14:54:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.377 14:54:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.377 14:54:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.377 14:54:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.377 14:54:45 -- paths/export.sh@5 -- # export PATH 00:08:22.377 14:54:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.377 14:54:45 -- nvmf/common.sh@46 -- # : 0 00:08:22.377 14:54:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:22.377 14:54:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:22.377 14:54:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:22.377 14:54:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.377 14:54:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.377 14:54:45 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:08:22.377 14:54:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:22.377 14:54:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:22.377 14:54:45 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:22.377 14:54:45 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:22.377 14:54:45 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:22.377 14:54:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:22.377 14:54:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.377 14:54:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:22.377 14:54:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:22.377 14:54:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:22.377 14:54:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.377 14:54:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.377 14:54:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.377 14:54:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:22.377 14:54:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:22.377 14:54:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:22.377 14:54:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:22.377 14:54:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:22.377 14:54:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:22.377 14:54:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.377 14:54:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:22.377 14:54:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:22.377 14:54:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:22.377 14:54:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:22.377 14:54:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:22.377 14:54:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:22.377 14:54:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.377 14:54:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:22.377 14:54:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:22.377 14:54:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:22.377 14:54:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:22.377 14:54:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:22.377 14:54:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:22.377 Cannot find device "nvmf_tgt_br" 00:08:22.378 14:54:45 -- nvmf/common.sh@154 -- # true 00:08:22.378 14:54:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:22.378 Cannot find device "nvmf_tgt_br2" 00:08:22.378 14:54:45 -- nvmf/common.sh@155 -- # true 00:08:22.378 14:54:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:22.378 14:54:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:22.378 Cannot find device "nvmf_tgt_br" 00:08:22.378 14:54:45 -- nvmf/common.sh@157 -- # true 00:08:22.378 14:54:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:22.378 Cannot find device "nvmf_tgt_br2" 00:08:22.378 14:54:45 -- nvmf/common.sh@158 -- # true 00:08:22.378 14:54:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:22.378 14:54:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:22.636 14:54:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:08:22.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:22.636 14:54:45 -- nvmf/common.sh@161 -- # true 00:08:22.636 14:54:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:22.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:22.636 14:54:45 -- nvmf/common.sh@162 -- # true 00:08:22.636 14:54:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:22.636 14:54:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:22.637 14:54:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:22.637 14:54:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:22.637 14:54:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:22.637 14:54:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:22.637 14:54:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:22.637 14:54:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:22.637 14:54:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:22.637 14:54:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:22.637 14:54:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:22.637 14:54:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:22.637 14:54:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:22.637 14:54:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:22.637 14:54:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:22.637 14:54:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:22.637 14:54:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:22.637 14:54:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:22.637 14:54:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:22.637 14:54:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:22.637 14:54:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:22.637 14:54:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:22.637 14:54:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:22.637 14:54:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:22.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:22.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:08:22.637 00:08:22.637 --- 10.0.0.2 ping statistics --- 00:08:22.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.637 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:22.637 14:54:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:22.637 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:22.637 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:08:22.637 00:08:22.637 --- 10.0.0.3 ping statistics --- 00:08:22.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.637 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:22.637 14:54:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:22.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:22.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:08:22.637 00:08:22.637 --- 10.0.0.1 ping statistics --- 00:08:22.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.637 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:08:22.637 14:54:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.637 14:54:46 -- nvmf/common.sh@421 -- # return 0 00:08:22.637 14:54:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:22.637 14:54:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.637 14:54:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:22.637 14:54:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:22.637 14:54:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.637 14:54:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:22.637 14:54:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:22.637 14:54:46 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:22.637 14:54:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:22.637 14:54:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:22.637 14:54:46 -- common/autotest_common.sh@10 -- # set +x 00:08:22.637 14:54:46 -- nvmf/common.sh@469 -- # nvmfpid=74053 00:08:22.637 14:54:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:22.637 14:54:46 -- nvmf/common.sh@470 -- # waitforlisten 74053 00:08:22.637 14:54:46 -- common/autotest_common.sh@829 -- # '[' -z 74053 ']' 00:08:22.637 14:54:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.637 14:54:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:22.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.637 14:54:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.637 14:54:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:22.637 14:54:46 -- common/autotest_common.sh@10 -- # set +x 00:08:22.896 [2024-11-18 14:54:46.242452] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:22.896 [2024-11-18 14:54:46.242538] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.896 [2024-11-18 14:54:46.381962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:22.896 [2024-11-18 14:54:46.441095] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:22.896 [2024-11-18 14:54:46.441213] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.896 [2024-11-18 14:54:46.441224] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.896 [2024-11-18 14:54:46.441231] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
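The nvmf_veth_init block above is what builds the test network before each target starts: one namespace for the target, three veth pairs, and a bridge tying their peer ends together, verified by the three pings. Condensed to its essentials, using the same commands that appear in the trace:

    # target namespace plus veth pairs; the *_if ends carry traffic, the *_br ends join the bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addressing: 10.0.0.1 = initiator (root ns), 10.0.0.2 / 10.0.0.3 = target interfaces
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # one bridge joins the three *_br peers into a single L2 segment
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # (all interfaces are then brought up and an iptables ACCEPT rule opens TCP port 4420)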
00:08:22.896 [2024-11-18 14:54:46.441404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.896 [2024-11-18 14:54:46.442038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.896 [2024-11-18 14:54:46.442220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.896 [2024-11-18 14:54:46.442410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.832 14:54:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:23.832 14:54:47 -- common/autotest_common.sh@862 -- # return 0 00:08:23.832 14:54:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:23.832 14:54:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:23.832 14:54:47 -- common/autotest_common.sh@10 -- # set +x 00:08:23.832 14:54:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.832 14:54:47 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:23.832 14:54:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.832 14:54:47 -- common/autotest_common.sh@10 -- # set +x 00:08:23.832 [2024-11-18 14:54:47.307608] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.832 14:54:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.832 14:54:47 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:23.832 14:54:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.832 14:54:47 -- common/autotest_common.sh@10 -- # set +x 00:08:23.832 14:54:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.832 14:54:47 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:23.832 14:54:47 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:23.832 14:54:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.832 14:54:47 -- common/autotest_common.sh@10 -- # set +x 00:08:23.832 14:54:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.832 14:54:47 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:23.832 14:54:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.832 14:54:47 -- common/autotest_common.sh@10 -- # set +x 00:08:23.832 14:54:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.832 14:54:47 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:23.832 14:54:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.832 14:54:47 -- common/autotest_common.sh@10 -- # set +x 00:08:23.832 [2024-11-18 14:54:47.371157] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.833 14:54:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.833 14:54:47 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:23.833 14:54:47 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:23.833 14:54:47 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:23.833 14:54:47 -- target/connect_disconnect.sh@34 -- # set +x 00:08:26.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:08:35.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.169 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:10:25.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.563 14:58:32 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
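The long run of "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines is the visible half of the connect_disconnect test: 100 iterations (num_iterations=100) of connecting a host with 'nvme connect -i 8' to the cnode1 subsystem built at the start of the test and disconnecting again. A reconstructed sketch of both halves; the wait between connect and disconnect is an assumption about what the helper does, not taken from the trace:

    # target side: one 64 MB malloc namespace (512-byte blocks) behind cnode1 on 10.0.0.2:4420
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 64 512          # -> Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # host side: 100 connect/disconnect cycles; each disconnect prints one of the lines above
    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        sleep 0.1   # placeholder for the real "wait until the namespace device appears" step
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done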
00:12:08.563 14:58:32 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:08.563 14:58:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:08.563 14:58:32 -- nvmf/common.sh@116 -- # sync 00:12:08.563 14:58:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:08.563 14:58:32 -- nvmf/common.sh@119 -- # set +e 00:12:08.563 14:58:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:08.563 14:58:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:08.563 rmmod nvme_tcp 00:12:08.563 rmmod nvme_fabrics 00:12:08.563 rmmod nvme_keyring 00:12:08.563 14:58:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:08.563 14:58:32 -- nvmf/common.sh@123 -- # set -e 00:12:08.563 14:58:32 -- nvmf/common.sh@124 -- # return 0 00:12:08.563 14:58:32 -- nvmf/common.sh@477 -- # '[' -n 74053 ']' 00:12:08.563 14:58:32 -- nvmf/common.sh@478 -- # killprocess 74053 00:12:08.563 14:58:32 -- common/autotest_common.sh@936 -- # '[' -z 74053 ']' 00:12:08.563 14:58:32 -- common/autotest_common.sh@940 -- # kill -0 74053 00:12:08.563 14:58:32 -- common/autotest_common.sh@941 -- # uname 00:12:08.563 14:58:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:08.563 14:58:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74053 00:12:08.822 14:58:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:08.822 14:58:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:08.822 killing process with pid 74053 00:12:08.822 14:58:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74053' 00:12:08.822 14:58:32 -- common/autotest_common.sh@955 -- # kill 74053 00:12:08.822 14:58:32 -- common/autotest_common.sh@960 -- # wait 74053 00:12:08.822 14:58:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:08.822 14:58:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:08.822 14:58:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:08.823 14:58:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:08.823 14:58:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:08.823 14:58:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.823 14:58:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:08.823 14:58:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.082 14:58:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:09.082 ************************************ 00:12:09.082 END TEST nvmf_connect_disconnect 00:12:09.082 ************************************ 00:12:09.082 00:12:09.082 real 3m46.757s 00:12:09.082 user 14m48.208s 00:12:09.082 sys 0m18.064s 00:12:09.082 14:58:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:09.082 14:58:32 -- common/autotest_common.sh@10 -- # set +x 00:12:09.082 14:58:32 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:09.082 14:58:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:09.082 14:58:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:09.082 14:58:32 -- common/autotest_common.sh@10 -- # set +x 00:12:09.082 ************************************ 00:12:09.082 START TEST nvmf_multitarget 00:12:09.082 ************************************ 00:12:09.082 14:58:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:09.082 * Looking for test storage... 
00:12:09.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:09.082 14:58:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:09.082 14:58:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:09.082 14:58:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:09.082 14:58:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:09.082 14:58:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:09.082 14:58:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:09.082 14:58:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:09.082 14:58:32 -- scripts/common.sh@335 -- # IFS=.-: 00:12:09.082 14:58:32 -- scripts/common.sh@335 -- # read -ra ver1 00:12:09.082 14:58:32 -- scripts/common.sh@336 -- # IFS=.-: 00:12:09.082 14:58:32 -- scripts/common.sh@336 -- # read -ra ver2 00:12:09.082 14:58:32 -- scripts/common.sh@337 -- # local 'op=<' 00:12:09.082 14:58:32 -- scripts/common.sh@339 -- # ver1_l=2 00:12:09.082 14:58:32 -- scripts/common.sh@340 -- # ver2_l=1 00:12:09.082 14:58:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:09.082 14:58:32 -- scripts/common.sh@343 -- # case "$op" in 00:12:09.082 14:58:32 -- scripts/common.sh@344 -- # : 1 00:12:09.082 14:58:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:09.082 14:58:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:09.082 14:58:32 -- scripts/common.sh@364 -- # decimal 1 00:12:09.082 14:58:32 -- scripts/common.sh@352 -- # local d=1 00:12:09.082 14:58:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:09.082 14:58:32 -- scripts/common.sh@354 -- # echo 1 00:12:09.082 14:58:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:09.082 14:58:32 -- scripts/common.sh@365 -- # decimal 2 00:12:09.082 14:58:32 -- scripts/common.sh@352 -- # local d=2 00:12:09.082 14:58:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:09.082 14:58:32 -- scripts/common.sh@354 -- # echo 2 00:12:09.082 14:58:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:09.082 14:58:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:09.082 14:58:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:09.082 14:58:32 -- scripts/common.sh@367 -- # return 0 00:12:09.082 14:58:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:09.082 14:58:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:09.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.082 --rc genhtml_branch_coverage=1 00:12:09.082 --rc genhtml_function_coverage=1 00:12:09.082 --rc genhtml_legend=1 00:12:09.082 --rc geninfo_all_blocks=1 00:12:09.082 --rc geninfo_unexecuted_blocks=1 00:12:09.082 00:12:09.082 ' 00:12:09.082 14:58:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:09.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.082 --rc genhtml_branch_coverage=1 00:12:09.082 --rc genhtml_function_coverage=1 00:12:09.082 --rc genhtml_legend=1 00:12:09.082 --rc geninfo_all_blocks=1 00:12:09.082 --rc geninfo_unexecuted_blocks=1 00:12:09.082 00:12:09.082 ' 00:12:09.082 14:58:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:09.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.082 --rc genhtml_branch_coverage=1 00:12:09.082 --rc genhtml_function_coverage=1 00:12:09.082 --rc genhtml_legend=1 00:12:09.082 --rc geninfo_all_blocks=1 00:12:09.082 --rc geninfo_unexecuted_blocks=1 00:12:09.082 00:12:09.082 ' 00:12:09.082 
14:58:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:09.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.082 --rc genhtml_branch_coverage=1 00:12:09.083 --rc genhtml_function_coverage=1 00:12:09.083 --rc genhtml_legend=1 00:12:09.083 --rc geninfo_all_blocks=1 00:12:09.083 --rc geninfo_unexecuted_blocks=1 00:12:09.083 00:12:09.083 ' 00:12:09.083 14:58:32 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:09.083 14:58:32 -- nvmf/common.sh@7 -- # uname -s 00:12:09.083 14:58:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.083 14:58:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.083 14:58:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.083 14:58:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.083 14:58:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.083 14:58:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.083 14:58:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.083 14:58:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.083 14:58:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.083 14:58:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.083 14:58:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:12:09.083 14:58:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:12:09.083 14:58:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.083 14:58:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.083 14:58:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:09.083 14:58:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:09.083 14:58:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.083 14:58:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.083 14:58:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.083 14:58:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.083 14:58:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.083 14:58:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.083 14:58:32 -- paths/export.sh@5 -- # export PATH 00:12:09.083 14:58:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.083 14:58:32 -- nvmf/common.sh@46 -- # : 0 00:12:09.083 14:58:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:09.083 14:58:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:09.083 14:58:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:09.083 14:58:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.083 14:58:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.083 14:58:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:09.083 14:58:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:09.083 14:58:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:09.083 14:58:32 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:09.083 14:58:32 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:09.083 14:58:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:09.083 14:58:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.083 14:58:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:09.083 14:58:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:09.083 14:58:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:09.083 14:58:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.083 14:58:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.083 14:58:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.083 14:58:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:09.083 14:58:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:09.083 14:58:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:09.083 14:58:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:09.083 14:58:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:09.083 14:58:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:09.083 14:58:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.083 14:58:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.083 14:58:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:09.083 14:58:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:09.083 14:58:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:09.083 14:58:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:09.083 14:58:32 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:09.083 14:58:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.083 14:58:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:09.083 14:58:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:09.083 14:58:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:09.083 14:58:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:09.083 14:58:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:09.342 14:58:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:09.342 Cannot find device "nvmf_tgt_br" 00:12:09.342 14:58:32 -- nvmf/common.sh@154 -- # true 00:12:09.342 14:58:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:09.342 Cannot find device "nvmf_tgt_br2" 00:12:09.342 14:58:32 -- nvmf/common.sh@155 -- # true 00:12:09.342 14:58:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:09.342 14:58:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:09.342 Cannot find device "nvmf_tgt_br" 00:12:09.342 14:58:32 -- nvmf/common.sh@157 -- # true 00:12:09.342 14:58:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:09.342 Cannot find device "nvmf_tgt_br2" 00:12:09.342 14:58:32 -- nvmf/common.sh@158 -- # true 00:12:09.342 14:58:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:09.342 14:58:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:09.342 14:58:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:09.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:09.342 14:58:32 -- nvmf/common.sh@161 -- # true 00:12:09.342 14:58:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:09.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:09.342 14:58:32 -- nvmf/common.sh@162 -- # true 00:12:09.342 14:58:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:09.342 14:58:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:09.342 14:58:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:09.342 14:58:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:09.342 14:58:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:09.342 14:58:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:09.342 14:58:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:09.342 14:58:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:09.342 14:58:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:09.342 14:58:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:09.342 14:58:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:09.342 14:58:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:09.342 14:58:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:09.342 14:58:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:09.601 14:58:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:09.601 14:58:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:09.601 14:58:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:09.601 14:58:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:09.601 14:58:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:09.601 14:58:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:09.601 14:58:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:09.601 14:58:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:09.601 14:58:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:09.601 14:58:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:09.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:09.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:12:09.601 00:12:09.601 --- 10.0.0.2 ping statistics --- 00:12:09.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.601 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:12:09.601 14:58:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:09.601 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:09.601 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:12:09.601 00:12:09.601 --- 10.0.0.3 ping statistics --- 00:12:09.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.601 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:12:09.601 14:58:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:09.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:09.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:12:09.601 00:12:09.601 --- 10.0.0.1 ping statistics --- 00:12:09.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.601 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:12:09.601 14:58:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.601 14:58:33 -- nvmf/common.sh@421 -- # return 0 00:12:09.601 14:58:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:09.601 14:58:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.601 14:58:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:09.601 14:58:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:09.601 14:58:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.601 14:58:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:09.601 14:58:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:09.601 14:58:33 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:09.601 14:58:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:09.601 14:58:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:09.601 14:58:33 -- common/autotest_common.sh@10 -- # set +x 00:12:09.601 14:58:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:09.601 14:58:33 -- nvmf/common.sh@469 -- # nvmfpid=77856 00:12:09.601 14:58:33 -- nvmf/common.sh@470 -- # waitforlisten 77856 00:12:09.601 14:58:33 -- common/autotest_common.sh@829 -- # '[' -z 77856 ']' 00:12:09.601 14:58:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.601 14:58:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:09.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
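For reference, the network plumbing traced above (the nvmf_veth_init path in nvmf/common.sh) can be condensed into the following minimal sketch. The namespace, interface, bridge names and addresses (nvmf_tgt_ns_spdk, nvmf_init_if/nvmf_init_br, nvmf_tgt_if/nvmf_tgt_br, nvmf_br, 10.0.0.1-10.0.0.3, TCP port 4420) are taken from the traced commands; the second target interface, error handling, and the teardown path are left out.

# veth pair between the host (initiator side) and the target namespace,
# bridged together so 10.0.0.1 (initiator) can reach 10.0.0.2 (target).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# tie both host-side veth ends to one bridge and open the NVMe/TCP port
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # initiator -> target reachability, as checked above

The logged run additionally wires up a second target interface (nvmf_tgt_if2/nvmf_tgt_br2 with 10.0.0.3) in the same way before pinging all three addresses.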
00:12:09.601 14:58:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.601 14:58:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:09.601 14:58:33 -- common/autotest_common.sh@10 -- # set +x 00:12:09.601 [2024-11-18 14:58:33.081729] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:09.601 [2024-11-18 14:58:33.081802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.860 [2024-11-18 14:58:33.211423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.860 [2024-11-18 14:58:33.270637] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:09.860 [2024-11-18 14:58:33.270830] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.860 [2024-11-18 14:58:33.270846] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.860 [2024-11-18 14:58:33.270853] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:09.860 [2024-11-18 14:58:33.271032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.860 [2024-11-18 14:58:33.271168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.860 [2024-11-18 14:58:33.271907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.860 [2024-11-18 14:58:33.271962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.802 14:58:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:10.802 14:58:34 -- common/autotest_common.sh@862 -- # return 0 00:12:10.802 14:58:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:10.802 14:58:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:10.802 14:58:34 -- common/autotest_common.sh@10 -- # set +x 00:12:10.802 14:58:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.802 14:58:34 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:10.802 14:58:34 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:10.802 14:58:34 -- target/multitarget.sh@21 -- # jq length 00:12:10.802 14:58:34 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:10.802 14:58:34 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:11.090 "nvmf_tgt_1" 00:12:11.090 14:58:34 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:11.090 "nvmf_tgt_2" 00:12:11.090 14:58:34 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:11.090 14:58:34 -- target/multitarget.sh@28 -- # jq length 00:12:11.090 14:58:34 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:11.090 14:58:34 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:11.378 true 00:12:11.378 14:58:34 -- target/multitarget.sh@33 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:11.378 true 00:12:11.378 14:58:34 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:11.378 14:58:34 -- target/multitarget.sh@35 -- # jq length 00:12:11.649 14:58:35 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:11.649 14:58:35 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:11.649 14:58:35 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:11.649 14:58:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:11.649 14:58:35 -- nvmf/common.sh@116 -- # sync 00:12:11.649 14:58:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:11.649 14:58:35 -- nvmf/common.sh@119 -- # set +e 00:12:11.649 14:58:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:11.649 14:58:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:11.649 rmmod nvme_tcp 00:12:11.649 rmmod nvme_fabrics 00:12:11.649 rmmod nvme_keyring 00:12:11.649 14:58:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:11.649 14:58:35 -- nvmf/common.sh@123 -- # set -e 00:12:11.649 14:58:35 -- nvmf/common.sh@124 -- # return 0 00:12:11.649 14:58:35 -- nvmf/common.sh@477 -- # '[' -n 77856 ']' 00:12:11.649 14:58:35 -- nvmf/common.sh@478 -- # killprocess 77856 00:12:11.649 14:58:35 -- common/autotest_common.sh@936 -- # '[' -z 77856 ']' 00:12:11.649 14:58:35 -- common/autotest_common.sh@940 -- # kill -0 77856 00:12:11.649 14:58:35 -- common/autotest_common.sh@941 -- # uname 00:12:11.649 14:58:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:11.649 14:58:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77856 00:12:11.649 14:58:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:11.649 14:58:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:11.649 14:58:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77856' 00:12:11.649 killing process with pid 77856 00:12:11.649 14:58:35 -- common/autotest_common.sh@955 -- # kill 77856 00:12:11.649 14:58:35 -- common/autotest_common.sh@960 -- # wait 77856 00:12:11.908 14:58:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:11.908 14:58:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:11.908 14:58:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:11.908 14:58:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:11.908 14:58:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:11.908 14:58:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.908 14:58:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.908 14:58:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.908 14:58:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:11.908 00:12:11.908 real 0m2.981s 00:12:11.908 user 0m9.885s 00:12:11.908 sys 0m0.694s 00:12:11.908 14:58:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:11.908 14:58:35 -- common/autotest_common.sh@10 -- # set +x 00:12:11.908 ************************************ 00:12:11.908 END TEST nvmf_multitarget 00:12:11.908 ************************************ 00:12:12.168 14:58:35 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:12.168 14:58:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:12.168 14:58:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:12.168 
14:58:35 -- common/autotest_common.sh@10 -- # set +x 00:12:12.168 ************************************ 00:12:12.168 START TEST nvmf_rpc 00:12:12.168 ************************************ 00:12:12.168 14:58:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:12.168 * Looking for test storage... 00:12:12.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:12.168 14:58:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:12.168 14:58:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:12.168 14:58:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:12.168 14:58:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:12.168 14:58:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:12.168 14:58:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:12.168 14:58:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:12.168 14:58:35 -- scripts/common.sh@335 -- # IFS=.-: 00:12:12.168 14:58:35 -- scripts/common.sh@335 -- # read -ra ver1 00:12:12.168 14:58:35 -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.168 14:58:35 -- scripts/common.sh@336 -- # read -ra ver2 00:12:12.168 14:58:35 -- scripts/common.sh@337 -- # local 'op=<' 00:12:12.168 14:58:35 -- scripts/common.sh@339 -- # ver1_l=2 00:12:12.168 14:58:35 -- scripts/common.sh@340 -- # ver2_l=1 00:12:12.168 14:58:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:12.168 14:58:35 -- scripts/common.sh@343 -- # case "$op" in 00:12:12.168 14:58:35 -- scripts/common.sh@344 -- # : 1 00:12:12.168 14:58:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:12.168 14:58:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:12.168 14:58:35 -- scripts/common.sh@364 -- # decimal 1 00:12:12.168 14:58:35 -- scripts/common.sh@352 -- # local d=1 00:12:12.168 14:58:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.168 14:58:35 -- scripts/common.sh@354 -- # echo 1 00:12:12.168 14:58:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:12.168 14:58:35 -- scripts/common.sh@365 -- # decimal 2 00:12:12.168 14:58:35 -- scripts/common.sh@352 -- # local d=2 00:12:12.168 14:58:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.168 14:58:35 -- scripts/common.sh@354 -- # echo 2 00:12:12.168 14:58:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:12.168 14:58:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:12.168 14:58:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:12.168 14:58:35 -- scripts/common.sh@367 -- # return 0 00:12:12.168 14:58:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.168 14:58:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:12.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.168 --rc genhtml_branch_coverage=1 00:12:12.168 --rc genhtml_function_coverage=1 00:12:12.168 --rc genhtml_legend=1 00:12:12.168 --rc geninfo_all_blocks=1 00:12:12.168 --rc geninfo_unexecuted_blocks=1 00:12:12.168 00:12:12.168 ' 00:12:12.168 14:58:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:12.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.168 --rc genhtml_branch_coverage=1 00:12:12.168 --rc genhtml_function_coverage=1 00:12:12.168 --rc genhtml_legend=1 00:12:12.168 --rc geninfo_all_blocks=1 00:12:12.168 --rc geninfo_unexecuted_blocks=1 00:12:12.168 00:12:12.168 ' 00:12:12.168 14:58:35 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:12.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.168 --rc genhtml_branch_coverage=1 00:12:12.168 --rc genhtml_function_coverage=1 00:12:12.168 --rc genhtml_legend=1 00:12:12.168 --rc geninfo_all_blocks=1 00:12:12.168 --rc geninfo_unexecuted_blocks=1 00:12:12.168 00:12:12.168 ' 00:12:12.168 14:58:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:12.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.168 --rc genhtml_branch_coverage=1 00:12:12.168 --rc genhtml_function_coverage=1 00:12:12.168 --rc genhtml_legend=1 00:12:12.168 --rc geninfo_all_blocks=1 00:12:12.168 --rc geninfo_unexecuted_blocks=1 00:12:12.168 00:12:12.168 ' 00:12:12.168 14:58:35 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:12.168 14:58:35 -- nvmf/common.sh@7 -- # uname -s 00:12:12.168 14:58:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.168 14:58:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.169 14:58:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.169 14:58:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.169 14:58:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.169 14:58:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.169 14:58:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.169 14:58:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.169 14:58:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.169 14:58:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.169 14:58:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:12:12.169 14:58:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:12:12.169 14:58:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.169 14:58:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.169 14:58:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:12.169 14:58:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:12.169 14:58:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.169 14:58:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.169 14:58:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.169 14:58:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.169 14:58:35 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.169 14:58:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.169 14:58:35 -- paths/export.sh@5 -- # export PATH 00:12:12.169 14:58:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.169 14:58:35 -- nvmf/common.sh@46 -- # : 0 00:12:12.169 14:58:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:12.169 14:58:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:12.169 14:58:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:12.169 14:58:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.169 14:58:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.169 14:58:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:12.169 14:58:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:12.169 14:58:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:12.169 14:58:35 -- target/rpc.sh@11 -- # loops=5 00:12:12.169 14:58:35 -- target/rpc.sh@23 -- # nvmftestinit 00:12:12.169 14:58:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:12.169 14:58:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.169 14:58:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:12.169 14:58:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:12.169 14:58:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:12.169 14:58:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.169 14:58:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:12.169 14:58:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.169 14:58:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:12.169 14:58:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:12.169 14:58:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:12.169 14:58:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:12.169 14:58:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:12.169 14:58:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:12.169 14:58:35 -- nvmf/common.sh@140 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:12:12.169 14:58:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:12.169 14:58:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:12.169 14:58:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:12.169 14:58:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:12.169 14:58:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:12.169 14:58:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:12.169 14:58:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.169 14:58:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:12.169 14:58:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:12.169 14:58:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:12.169 14:58:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:12.169 14:58:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:12.169 14:58:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:12.169 Cannot find device "nvmf_tgt_br" 00:12:12.169 14:58:35 -- nvmf/common.sh@154 -- # true 00:12:12.169 14:58:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:12.169 Cannot find device "nvmf_tgt_br2" 00:12:12.169 14:58:35 -- nvmf/common.sh@155 -- # true 00:12:12.169 14:58:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:12.169 14:58:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:12.169 Cannot find device "nvmf_tgt_br" 00:12:12.169 14:58:35 -- nvmf/common.sh@157 -- # true 00:12:12.169 14:58:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:12.428 Cannot find device "nvmf_tgt_br2" 00:12:12.428 14:58:35 -- nvmf/common.sh@158 -- # true 00:12:12.428 14:58:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:12.428 14:58:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:12.428 14:58:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:12.428 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:12.428 14:58:35 -- nvmf/common.sh@161 -- # true 00:12:12.428 14:58:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:12.428 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:12.428 14:58:35 -- nvmf/common.sh@162 -- # true 00:12:12.428 14:58:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:12.428 14:58:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:12.428 14:58:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:12.428 14:58:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:12.428 14:58:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:12.428 14:58:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:12.428 14:58:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:12.428 14:58:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:12.428 14:58:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:12.428 14:58:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:12.428 14:58:35 -- nvmf/common.sh@183 -- # ip 
link set nvmf_init_br up 00:12:12.428 14:58:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:12.428 14:58:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:12.428 14:58:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:12.428 14:58:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:12.428 14:58:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:12.428 14:58:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:12.428 14:58:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:12.428 14:58:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:12.428 14:58:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:12.428 14:58:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:12.428 14:58:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:12.428 14:58:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:12.428 14:58:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:12.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:12.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:12:12.428 00:12:12.428 --- 10.0.0.2 ping statistics --- 00:12:12.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.428 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:12:12.428 14:58:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:12.428 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:12.428 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:12:12.428 00:12:12.428 --- 10.0.0.3 ping statistics --- 00:12:12.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.429 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:12.429 14:58:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:12.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:12.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:12:12.429 00:12:12.429 --- 10.0.0.1 ping statistics --- 00:12:12.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.429 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:12:12.687 14:58:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.687 14:58:36 -- nvmf/common.sh@421 -- # return 0 00:12:12.687 14:58:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:12.687 14:58:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.687 14:58:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:12.687 14:58:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:12.687 14:58:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.687 14:58:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:12.687 14:58:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:12.687 14:58:36 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:12.687 14:58:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:12.687 14:58:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:12.687 14:58:36 -- common/autotest_common.sh@10 -- # set +x 00:12:12.687 14:58:36 -- nvmf/common.sh@469 -- # nvmfpid=78084 00:12:12.687 14:58:36 -- nvmf/common.sh@470 -- # waitforlisten 78084 00:12:12.687 14:58:36 -- common/autotest_common.sh@829 -- # '[' -z 78084 ']' 00:12:12.687 14:58:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.687 14:58:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:12.687 14:58:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.687 14:58:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:12.687 14:58:36 -- common/autotest_common.sh@10 -- # set +x 00:12:12.687 14:58:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:12.687 [2024-11-18 14:58:36.094656] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:12.687 [2024-11-18 14:58:36.094782] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.687 [2024-11-18 14:58:36.233554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.946 [2024-11-18 14:58:36.301100] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:12.946 [2024-11-18 14:58:36.301226] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.946 [2024-11-18 14:58:36.301237] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.946 [2024-11-18 14:58:36.301245] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
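The nvmfappstart step that produced the notices above amounts to launching nvmf_tgt inside the target namespace and waiting for its RPC socket before issuing any RPCs. A rough sketch follows; the real waitforlisten helper (which retries and gives up after a timeout) is replaced here by a simple polling loop for illustration, and rpc.py stands in for the rpc_cmd wrapper used in the trace.

# Launch the SPDK NVMe-oF target inside the namespace (flags as traced:
# shared-memory id 0, tracepoint mask 0xFFFF, reactor mask 0xF -> 4 cores).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Simplified stand-in for waitforlisten: block until the RPC socket appears.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done
# With the app up, the TCP transport is created over RPC
# (see the nvmf_create_transport call traced further below).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192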
00:12:12.946 [2024-11-18 14:58:36.301408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.946 [2024-11-18 14:58:36.301646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.946 [2024-11-18 14:58:36.302386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.946 [2024-11-18 14:58:36.302394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.514 14:58:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:13.514 14:58:37 -- common/autotest_common.sh@862 -- # return 0 00:12:13.514 14:58:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:13.514 14:58:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:13.514 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:12:13.514 14:58:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.514 14:58:37 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:13.514 14:58:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.514 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:12:13.773 14:58:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.773 14:58:37 -- target/rpc.sh@26 -- # stats='{ 00:12:13.773 "poll_groups": [ 00:12:13.773 { 00:12:13.773 "admin_qpairs": 0, 00:12:13.773 "completed_nvme_io": 0, 00:12:13.773 "current_admin_qpairs": 0, 00:12:13.773 "current_io_qpairs": 0, 00:12:13.773 "io_qpairs": 0, 00:12:13.773 "name": "nvmf_tgt_poll_group_0", 00:12:13.773 "pending_bdev_io": 0, 00:12:13.773 "transports": [] 00:12:13.773 }, 00:12:13.773 { 00:12:13.773 "admin_qpairs": 0, 00:12:13.773 "completed_nvme_io": 0, 00:12:13.773 "current_admin_qpairs": 0, 00:12:13.773 "current_io_qpairs": 0, 00:12:13.773 "io_qpairs": 0, 00:12:13.773 "name": "nvmf_tgt_poll_group_1", 00:12:13.773 "pending_bdev_io": 0, 00:12:13.773 "transports": [] 00:12:13.773 }, 00:12:13.773 { 00:12:13.773 "admin_qpairs": 0, 00:12:13.773 "completed_nvme_io": 0, 00:12:13.773 "current_admin_qpairs": 0, 00:12:13.773 "current_io_qpairs": 0, 00:12:13.773 "io_qpairs": 0, 00:12:13.773 "name": "nvmf_tgt_poll_group_2", 00:12:13.773 "pending_bdev_io": 0, 00:12:13.773 "transports": [] 00:12:13.773 }, 00:12:13.773 { 00:12:13.773 "admin_qpairs": 0, 00:12:13.773 "completed_nvme_io": 0, 00:12:13.773 "current_admin_qpairs": 0, 00:12:13.773 "current_io_qpairs": 0, 00:12:13.773 "io_qpairs": 0, 00:12:13.773 "name": "nvmf_tgt_poll_group_3", 00:12:13.773 "pending_bdev_io": 0, 00:12:13.773 "transports": [] 00:12:13.773 } 00:12:13.773 ], 00:12:13.773 "tick_rate": 2200000000 00:12:13.773 }' 00:12:13.773 14:58:37 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:13.773 14:58:37 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:13.773 14:58:37 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:13.773 14:58:37 -- target/rpc.sh@15 -- # wc -l 00:12:13.773 14:58:37 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:13.773 14:58:37 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:13.773 14:58:37 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:13.773 14:58:37 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:13.773 14:58:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.773 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:12:13.773 [2024-11-18 14:58:37.221437] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.773 14:58:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.773 14:58:37 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:13.773 14:58:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.773 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:12:13.773 14:58:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.773 14:58:37 -- target/rpc.sh@33 -- # stats='{ 00:12:13.773 "poll_groups": [ 00:12:13.773 { 00:12:13.773 "admin_qpairs": 0, 00:12:13.773 "completed_nvme_io": 0, 00:12:13.773 "current_admin_qpairs": 0, 00:12:13.773 "current_io_qpairs": 0, 00:12:13.773 "io_qpairs": 0, 00:12:13.773 "name": "nvmf_tgt_poll_group_0", 00:12:13.773 "pending_bdev_io": 0, 00:12:13.773 "transports": [ 00:12:13.773 { 00:12:13.773 "trtype": "TCP" 00:12:13.773 } 00:12:13.773 ] 00:12:13.773 }, 00:12:13.773 { 00:12:13.773 "admin_qpairs": 0, 00:12:13.773 "completed_nvme_io": 0, 00:12:13.773 "current_admin_qpairs": 0, 00:12:13.773 "current_io_qpairs": 0, 00:12:13.773 "io_qpairs": 0, 00:12:13.773 "name": "nvmf_tgt_poll_group_1", 00:12:13.773 "pending_bdev_io": 0, 00:12:13.773 "transports": [ 00:12:13.773 { 00:12:13.773 "trtype": "TCP" 00:12:13.773 } 00:12:13.773 ] 00:12:13.773 }, 00:12:13.773 { 00:12:13.773 "admin_qpairs": 0, 00:12:13.773 "completed_nvme_io": 0, 00:12:13.773 "current_admin_qpairs": 0, 00:12:13.773 "current_io_qpairs": 0, 00:12:13.773 "io_qpairs": 0, 00:12:13.773 "name": "nvmf_tgt_poll_group_2", 00:12:13.773 "pending_bdev_io": 0, 00:12:13.773 "transports": [ 00:12:13.773 { 00:12:13.773 "trtype": "TCP" 00:12:13.773 } 00:12:13.773 ] 00:12:13.773 }, 00:12:13.773 { 00:12:13.773 "admin_qpairs": 0, 00:12:13.773 "completed_nvme_io": 0, 00:12:13.773 "current_admin_qpairs": 0, 00:12:13.773 "current_io_qpairs": 0, 00:12:13.773 "io_qpairs": 0, 00:12:13.773 "name": "nvmf_tgt_poll_group_3", 00:12:13.773 "pending_bdev_io": 0, 00:12:13.773 "transports": [ 00:12:13.773 { 00:12:13.773 "trtype": "TCP" 00:12:13.773 } 00:12:13.773 ] 00:12:13.773 } 00:12:13.773 ], 00:12:13.773 "tick_rate": 2200000000 00:12:13.773 }' 00:12:13.773 14:58:37 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:13.773 14:58:37 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:13.773 14:58:37 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:13.773 14:58:37 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:13.773 14:58:37 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:13.773 14:58:37 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:13.773 14:58:37 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:13.773 14:58:37 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:13.773 14:58:37 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:14.032 14:58:37 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:14.032 14:58:37 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:14.032 14:58:37 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:14.032 14:58:37 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:14.032 14:58:37 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:14.032 14:58:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.032 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:12:14.032 Malloc1 00:12:14.032 14:58:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.032 14:58:37 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:14.032 14:58:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.032 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:12:14.032 
14:58:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.032 14:58:37 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:14.032 14:58:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.032 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:12:14.032 14:58:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.032 14:58:37 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:14.032 14:58:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.032 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:12:14.032 14:58:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.032 14:58:37 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.032 14:58:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.032 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:12:14.032 [2024-11-18 14:58:37.429494] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.032 14:58:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.032 14:58:37 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd -a 10.0.0.2 -s 4420 00:12:14.032 14:58:37 -- common/autotest_common.sh@650 -- # local es=0 00:12:14.033 14:58:37 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd -a 10.0.0.2 -s 4420 00:12:14.033 14:58:37 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:14.033 14:58:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:14.033 14:58:37 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:14.033 14:58:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:14.033 14:58:37 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:14.033 14:58:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:14.033 14:58:37 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:14.033 14:58:37 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:14.033 14:58:37 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd -a 10.0.0.2 -s 4420 00:12:14.033 [2024-11-18 14:58:37.457806] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd' 00:12:14.033 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:14.033 could not add new controller: failed to write to nvme-fabrics device 00:12:14.033 14:58:37 -- common/autotest_common.sh@653 -- # es=1 00:12:14.033 14:58:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:14.033 14:58:37 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:14.033 14:58:37 -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:12:14.033 14:58:37 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:12:14.033 14:58:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.033 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:12:14.033 14:58:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.033 14:58:37 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:14.291 14:58:37 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:14.291 14:58:37 -- common/autotest_common.sh@1187 -- # local i=0 00:12:14.291 14:58:37 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:14.291 14:58:37 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:14.291 14:58:37 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:16.196 14:58:39 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:16.196 14:58:39 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:16.196 14:58:39 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:16.196 14:58:39 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:16.196 14:58:39 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:16.196 14:58:39 -- common/autotest_common.sh@1197 -- # return 0 00:12:16.196 14:58:39 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:16.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.196 14:58:39 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:16.196 14:58:39 -- common/autotest_common.sh@1208 -- # local i=0 00:12:16.196 14:58:39 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:16.196 14:58:39 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.196 14:58:39 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.196 14:58:39 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:16.196 14:58:39 -- common/autotest_common.sh@1220 -- # return 0 00:12:16.196 14:58:39 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:12:16.196 14:58:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.196 14:58:39 -- common/autotest_common.sh@10 -- # set +x 00:12:16.196 14:58:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.196 14:58:39 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:16.196 14:58:39 -- common/autotest_common.sh@650 -- # local es=0 00:12:16.196 14:58:39 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:16.196 14:58:39 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:16.196 14:58:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:16.196 14:58:39 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:16.196 14:58:39 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:16.196 14:58:39 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:16.196 14:58:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:16.196 14:58:39 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:16.196 14:58:39 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:16.196 14:58:39 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:16.196 [2024-11-18 14:58:39.759310] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd' 00:12:16.196 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:16.196 could not add new controller: failed to write to nvme-fabrics device 00:12:16.196 14:58:39 -- common/autotest_common.sh@653 -- # es=1 00:12:16.196 14:58:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:16.196 14:58:39 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:16.196 14:58:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:16.196 14:58:39 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:16.196 14:58:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.196 14:58:39 -- common/autotest_common.sh@10 -- # set +x 00:12:16.196 14:58:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.196 14:58:39 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:16.455 14:58:39 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:16.455 14:58:39 -- common/autotest_common.sh@1187 -- # local i=0 00:12:16.455 14:58:39 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:16.455 14:58:39 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:16.455 14:58:39 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:18.988 14:58:41 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:18.988 14:58:41 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:18.988 14:58:41 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:18.988 14:58:41 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:18.988 14:58:41 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:18.988 14:58:41 -- common/autotest_common.sh@1197 -- # return 0 00:12:18.988 14:58:41 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:18.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.988 14:58:42 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:18.988 14:58:42 -- common/autotest_common.sh@1208 -- # local i=0 00:12:18.988 14:58:42 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:18.988 14:58:42 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.988 14:58:42 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:18.988 14:58:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.988 14:58:42 -- common/autotest_common.sh@1220 -- # return 0 00:12:18.988 14:58:42 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:18.988 14:58:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.988 14:58:42 -- common/autotest_common.sh@10 -- # set +x 00:12:18.988 14:58:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.988 14:58:42 -- target/rpc.sh@81 -- # seq 1 5 00:12:18.988 14:58:42 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:18.988 14:58:42 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:18.988 14:58:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.988 14:58:42 -- common/autotest_common.sh@10 -- # set +x 00:12:18.988 14:58:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.988 14:58:42 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.988 14:58:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.988 14:58:42 -- common/autotest_common.sh@10 -- # set +x 00:12:18.988 [2024-11-18 14:58:42.171048] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.988 14:58:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.988 14:58:42 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:18.988 14:58:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.988 14:58:42 -- common/autotest_common.sh@10 -- # set +x 00:12:18.988 14:58:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.988 14:58:42 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:18.988 14:58:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.988 14:58:42 -- common/autotest_common.sh@10 -- # set +x 00:12:18.988 14:58:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.989 14:58:42 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:18.989 14:58:42 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:18.989 14:58:42 -- common/autotest_common.sh@1187 -- # local i=0 00:12:18.989 14:58:42 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.989 14:58:42 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:18.989 14:58:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:20.893 14:58:44 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:20.893 14:58:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:20.893 14:58:44 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:20.893 14:58:44 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:20.893 14:58:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.893 14:58:44 -- common/autotest_common.sh@1197 -- # return 0 00:12:20.893 14:58:44 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:21.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.153 14:58:44 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:21.153 14:58:44 -- common/autotest_common.sh@1208 -- # local i=0 00:12:21.153 14:58:44 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:21.153 14:58:44 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 
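Each pass of the rpc.sh loop traced above and below follows the same create/connect/tear-down pattern. Condensed, one iteration looks roughly like this, with rpc.py standing in for the rpc_cmd wrapper and all NQNs, addresses, and the serial taken from the logged run:

rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # Malloc1: 64 MiB, 512 B blocks
rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd \
    --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # waitforserial: expect 1 device
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1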
00:12:21.153 14:58:44 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:21.153 14:58:44 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.153 14:58:44 -- common/autotest_common.sh@1220 -- # return 0 00:12:21.153 14:58:44 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:21.153 14:58:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.153 14:58:44 -- common/autotest_common.sh@10 -- # set +x 00:12:21.153 14:58:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.153 14:58:44 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.153 14:58:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.153 14:58:44 -- common/autotest_common.sh@10 -- # set +x 00:12:21.153 14:58:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.153 14:58:44 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:21.153 14:58:44 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:21.153 14:58:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.153 14:58:44 -- common/autotest_common.sh@10 -- # set +x 00:12:21.153 14:58:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.153 14:58:44 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.153 14:58:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.153 14:58:44 -- common/autotest_common.sh@10 -- # set +x 00:12:21.153 [2024-11-18 14:58:44.587375] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.153 14:58:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.153 14:58:44 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:21.153 14:58:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.153 14:58:44 -- common/autotest_common.sh@10 -- # set +x 00:12:21.153 14:58:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.153 14:58:44 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:21.153 14:58:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.153 14:58:44 -- common/autotest_common.sh@10 -- # set +x 00:12:21.153 14:58:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.153 14:58:44 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.413 14:58:44 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:21.413 14:58:44 -- common/autotest_common.sh@1187 -- # local i=0 00:12:21.413 14:58:44 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:21.413 14:58:44 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:21.413 14:58:44 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:23.317 14:58:46 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:23.317 14:58:46 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:23.317 14:58:46 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:23.317 14:58:46 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:23.317 14:58:46 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.317 14:58:46 -- 
common/autotest_common.sh@1197 -- # return 0 00:12:23.317 14:58:46 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.577 14:58:46 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.577 14:58:46 -- common/autotest_common.sh@1208 -- # local i=0 00:12:23.577 14:58:46 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:23.577 14:58:46 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.577 14:58:46 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:23.577 14:58:46 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.577 14:58:46 -- common/autotest_common.sh@1220 -- # return 0 00:12:23.577 14:58:46 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:23.577 14:58:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.577 14:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:23.577 14:58:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.577 14:58:46 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.577 14:58:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.577 14:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:23.577 14:58:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.577 14:58:46 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:23.577 14:58:46 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:23.577 14:58:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.577 14:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:23.577 14:58:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.577 14:58:46 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.577 14:58:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.577 14:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:23.577 [2024-11-18 14:58:46.991861] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.577 14:58:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.577 14:58:46 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:23.577 14:58:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.577 14:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:23.577 14:58:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.577 14:58:47 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:23.577 14:58:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.577 14:58:47 -- common/autotest_common.sh@10 -- # set +x 00:12:23.577 14:58:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.577 14:58:47 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.836 14:58:47 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:23.836 14:58:47 -- common/autotest_common.sh@1187 -- # local i=0 00:12:23.836 14:58:47 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.836 14:58:47 -- common/autotest_common.sh@1189 -- 
# [[ -n '' ]] 00:12:23.836 14:58:47 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:25.748 14:58:49 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:25.748 14:58:49 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:25.748 14:58:49 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:25.748 14:58:49 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:25.748 14:58:49 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.748 14:58:49 -- common/autotest_common.sh@1197 -- # return 0 00:12:25.748 14:58:49 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:25.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.748 14:58:49 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:25.748 14:58:49 -- common/autotest_common.sh@1208 -- # local i=0 00:12:25.748 14:58:49 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.748 14:58:49 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:25.748 14:58:49 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.748 14:58:49 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:25.748 14:58:49 -- common/autotest_common.sh@1220 -- # return 0 00:12:25.748 14:58:49 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:25.748 14:58:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.748 14:58:49 -- common/autotest_common.sh@10 -- # set +x 00:12:25.748 14:58:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.748 14:58:49 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.748 14:58:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.748 14:58:49 -- common/autotest_common.sh@10 -- # set +x 00:12:25.748 14:58:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.748 14:58:49 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:25.748 14:58:49 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:25.748 14:58:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.748 14:58:49 -- common/autotest_common.sh@10 -- # set +x 00:12:25.748 14:58:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.748 14:58:49 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:25.748 14:58:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.748 14:58:49 -- common/autotest_common.sh@10 -- # set +x 00:12:25.748 [2024-11-18 14:58:49.304301] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.748 14:58:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.748 14:58:49 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:25.748 14:58:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.748 14:58:49 -- common/autotest_common.sh@10 -- # set +x 00:12:25.748 14:58:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.748 14:58:49 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:25.748 14:58:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.748 14:58:49 -- common/autotest_common.sh@10 -- # set +x 00:12:25.748 14:58:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.748 
14:58:49 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:26.010 14:58:49 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:26.010 14:58:49 -- common/autotest_common.sh@1187 -- # local i=0 00:12:26.011 14:58:49 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:26.011 14:58:49 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:26.011 14:58:49 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:28.545 14:58:51 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:28.545 14:58:51 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:28.545 14:58:51 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:28.545 14:58:51 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:28.545 14:58:51 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:28.545 14:58:51 -- common/autotest_common.sh@1197 -- # return 0 00:12:28.545 14:58:51 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:28.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.545 14:58:51 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:28.545 14:58:51 -- common/autotest_common.sh@1208 -- # local i=0 00:12:28.545 14:58:51 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:28.545 14:58:51 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.545 14:58:51 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:28.545 14:58:51 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.545 14:58:51 -- common/autotest_common.sh@1220 -- # return 0 00:12:28.545 14:58:51 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:28.545 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.545 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:12:28.545 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.545 14:58:51 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:28.545 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.545 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:12:28.545 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.545 14:58:51 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:28.545 14:58:51 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:28.545 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.545 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:12:28.545 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.545 14:58:51 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.545 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.545 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:12:28.545 [2024-11-18 14:58:51.724805] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.545 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.545 14:58:51 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:28.545 
14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.545 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:12:28.545 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.545 14:58:51 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:28.545 14:58:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.545 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:12:28.545 14:58:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.545 14:58:51 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.545 14:58:51 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:28.545 14:58:51 -- common/autotest_common.sh@1187 -- # local i=0 00:12:28.545 14:58:51 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.545 14:58:51 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:28.545 14:58:51 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:30.449 14:58:53 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:30.449 14:58:53 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:30.449 14:58:53 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.449 14:58:53 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:30.449 14:58:53 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.449 14:58:53 -- common/autotest_common.sh@1197 -- # return 0 00:12:30.449 14:58:53 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.449 14:58:53 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.449 14:58:53 -- common/autotest_common.sh@1208 -- # local i=0 00:12:30.449 14:58:53 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:30.449 14:58:53 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.449 14:58:53 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:30.449 14:58:53 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.449 14:58:54 -- common/autotest_common.sh@1220 -- # return 0 00:12:30.449 14:58:54 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:30.449 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.449 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.449 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.449 14:58:54 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.449 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.449 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.449 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.449 14:58:54 -- target/rpc.sh@99 -- # seq 1 5 00:12:30.708 14:58:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:30.708 14:58:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:30.708 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.708 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.708 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.708 14:58:54 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.708 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.708 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.708 [2024-11-18 14:58:54.048920] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.708 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.708 14:58:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:30.708 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.708 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.708 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.708 14:58:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:30.708 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.708 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.708 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.708 14:58:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.708 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.708 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.708 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.708 14:58:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.708 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.708 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.708 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.708 14:58:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:30.708 14:58:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:30.708 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.708 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.708 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.708 14:58:54 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.708 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.708 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.708 [2024-11-18 14:58:54.097014] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.708 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.708 14:58:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:30.708 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.708 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.708 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.708 14:58:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:30.708 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.708 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.708 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.708 14:58:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.708 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.708 14:58:54 -- 
common/autotest_common.sh@10 -- # set +x 00:12:30.708 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.708 14:58:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.708 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.708 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.708 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.708 14:58:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:30.708 14:58:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:30.708 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.708 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.708 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.708 14:58:54 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.708 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.708 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.708 [2024-11-18 14:58:54.149109] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.708 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.708 14:58:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:30.708 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.708 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.708 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.708 14:58:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:30.708 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.708 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.708 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.708 14:58:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.708 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.708 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.708 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.708 14:58:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.708 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.709 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.709 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.709 14:58:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:30.709 14:58:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:30.709 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.709 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.709 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.709 14:58:54 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.709 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.709 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.709 [2024-11-18 14:58:54.197161] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.709 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.709 
14:58:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:30.709 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.709 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.709 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.709 14:58:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:30.709 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.709 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.709 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.709 14:58:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.709 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.709 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.709 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.709 14:58:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.709 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.709 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.709 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.709 14:58:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:30.709 14:58:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:30.709 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.709 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.709 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.709 14:58:54 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.709 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.709 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.709 [2024-11-18 14:58:54.245239] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.709 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.709 14:58:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:30.709 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.709 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.709 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.709 14:58:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:30.709 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.709 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.709 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.709 14:58:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.709 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.709 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.709 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.709 14:58:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.709 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.709 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.709 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.709 14:58:54 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
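The last entry above calls nvmf_get_stats; the JSON that follows is then reduced by the jsum helper, which runs a jq filter over every poll group and sums the values with awk. A minimal stand-alone version of that aggregation (reading from a stats.json file purely for illustration; the real helper operates on the captured $stats variable):

# jsum: sum one numeric field across all nvmf poll groups.
jsum() {
    local filter=$1
    jq "$filter" stats.json | awk '{s+=$1} END {print s}'
}

/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_stats > stats.json
jsum '.poll_groups[].admin_qpairs'   # 2+3+1+1 = 7 in the run below
jsum '.poll_groups[].io_qpairs'      # 16+17+19+18 = 70 in the run below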
00:12:30.709 14:58:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.709 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:30.968 14:58:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.968 14:58:54 -- target/rpc.sh@110 -- # stats='{ 00:12:30.968 "poll_groups": [ 00:12:30.968 { 00:12:30.968 "admin_qpairs": 2, 00:12:30.968 "completed_nvme_io": 67, 00:12:30.968 "current_admin_qpairs": 0, 00:12:30.968 "current_io_qpairs": 0, 00:12:30.968 "io_qpairs": 16, 00:12:30.968 "name": "nvmf_tgt_poll_group_0", 00:12:30.968 "pending_bdev_io": 0, 00:12:30.968 "transports": [ 00:12:30.968 { 00:12:30.968 "trtype": "TCP" 00:12:30.968 } 00:12:30.968 ] 00:12:30.968 }, 00:12:30.968 { 00:12:30.968 "admin_qpairs": 3, 00:12:30.968 "completed_nvme_io": 116, 00:12:30.968 "current_admin_qpairs": 0, 00:12:30.968 "current_io_qpairs": 0, 00:12:30.968 "io_qpairs": 17, 00:12:30.968 "name": "nvmf_tgt_poll_group_1", 00:12:30.968 "pending_bdev_io": 0, 00:12:30.968 "transports": [ 00:12:30.968 { 00:12:30.968 "trtype": "TCP" 00:12:30.968 } 00:12:30.968 ] 00:12:30.968 }, 00:12:30.968 { 00:12:30.968 "admin_qpairs": 1, 00:12:30.968 "completed_nvme_io": 168, 00:12:30.968 "current_admin_qpairs": 0, 00:12:30.968 "current_io_qpairs": 0, 00:12:30.968 "io_qpairs": 19, 00:12:30.968 "name": "nvmf_tgt_poll_group_2", 00:12:30.968 "pending_bdev_io": 0, 00:12:30.968 "transports": [ 00:12:30.968 { 00:12:30.968 "trtype": "TCP" 00:12:30.968 } 00:12:30.968 ] 00:12:30.968 }, 00:12:30.968 { 00:12:30.968 "admin_qpairs": 1, 00:12:30.968 "completed_nvme_io": 69, 00:12:30.968 "current_admin_qpairs": 0, 00:12:30.968 "current_io_qpairs": 0, 00:12:30.968 "io_qpairs": 18, 00:12:30.968 "name": "nvmf_tgt_poll_group_3", 00:12:30.968 "pending_bdev_io": 0, 00:12:30.968 "transports": [ 00:12:30.968 { 00:12:30.968 "trtype": "TCP" 00:12:30.968 } 00:12:30.968 ] 00:12:30.968 } 00:12:30.968 ], 00:12:30.968 "tick_rate": 2200000000 00:12:30.968 }' 00:12:30.968 14:58:54 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:30.968 14:58:54 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:30.968 14:58:54 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:30.968 14:58:54 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:30.968 14:58:54 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:30.968 14:58:54 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:30.968 14:58:54 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:30.968 14:58:54 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:30.968 14:58:54 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:30.968 14:58:54 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:30.968 14:58:54 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:30.968 14:58:54 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:30.968 14:58:54 -- target/rpc.sh@123 -- # nvmftestfini 00:12:30.968 14:58:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:30.968 14:58:54 -- nvmf/common.sh@116 -- # sync 00:12:30.968 14:58:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:30.968 14:58:54 -- nvmf/common.sh@119 -- # set +e 00:12:30.968 14:58:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:30.968 14:58:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:30.968 rmmod nvme_tcp 00:12:30.968 rmmod nvme_fabrics 00:12:30.968 rmmod nvme_keyring 00:12:30.968 14:58:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:30.968 14:58:54 -- nvmf/common.sh@123 -- # set -e 00:12:30.968 14:58:54 -- nvmf/common.sh@124 
-- # return 0 00:12:30.968 14:58:54 -- nvmf/common.sh@477 -- # '[' -n 78084 ']' 00:12:30.968 14:58:54 -- nvmf/common.sh@478 -- # killprocess 78084 00:12:30.969 14:58:54 -- common/autotest_common.sh@936 -- # '[' -z 78084 ']' 00:12:30.969 14:58:54 -- common/autotest_common.sh@940 -- # kill -0 78084 00:12:30.969 14:58:54 -- common/autotest_common.sh@941 -- # uname 00:12:30.969 14:58:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:30.969 14:58:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78084 00:12:30.969 killing process with pid 78084 00:12:30.969 14:58:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:30.969 14:58:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:30.969 14:58:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78084' 00:12:30.969 14:58:54 -- common/autotest_common.sh@955 -- # kill 78084 00:12:30.969 14:58:54 -- common/autotest_common.sh@960 -- # wait 78084 00:12:31.227 14:58:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:31.227 14:58:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:31.227 14:58:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:31.227 14:58:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:31.227 14:58:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:31.227 14:58:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.227 14:58:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.227 14:58:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.227 14:58:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:31.227 ************************************ 00:12:31.227 END TEST nvmf_rpc 00:12:31.227 ************************************ 00:12:31.227 00:12:31.227 real 0m19.293s 00:12:31.227 user 1m13.232s 00:12:31.227 sys 0m2.008s 00:12:31.227 14:58:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:31.227 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:31.487 14:58:54 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:31.487 14:58:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:31.487 14:58:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.487 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:12:31.487 ************************************ 00:12:31.487 START TEST nvmf_invalid 00:12:31.487 ************************************ 00:12:31.487 14:58:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:31.487 * Looking for test storage... 
00:12:31.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:31.487 14:58:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:31.487 14:58:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:31.487 14:58:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:31.487 14:58:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:31.487 14:58:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:31.487 14:58:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:31.487 14:58:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:31.487 14:58:54 -- scripts/common.sh@335 -- # IFS=.-: 00:12:31.487 14:58:54 -- scripts/common.sh@335 -- # read -ra ver1 00:12:31.487 14:58:54 -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.487 14:58:54 -- scripts/common.sh@336 -- # read -ra ver2 00:12:31.487 14:58:54 -- scripts/common.sh@337 -- # local 'op=<' 00:12:31.487 14:58:54 -- scripts/common.sh@339 -- # ver1_l=2 00:12:31.487 14:58:54 -- scripts/common.sh@340 -- # ver2_l=1 00:12:31.487 14:58:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:31.487 14:58:54 -- scripts/common.sh@343 -- # case "$op" in 00:12:31.487 14:58:54 -- scripts/common.sh@344 -- # : 1 00:12:31.487 14:58:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:31.487 14:58:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:31.487 14:58:54 -- scripts/common.sh@364 -- # decimal 1 00:12:31.487 14:58:55 -- scripts/common.sh@352 -- # local d=1 00:12:31.487 14:58:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.487 14:58:55 -- scripts/common.sh@354 -- # echo 1 00:12:31.487 14:58:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:31.487 14:58:55 -- scripts/common.sh@365 -- # decimal 2 00:12:31.487 14:58:55 -- scripts/common.sh@352 -- # local d=2 00:12:31.487 14:58:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.487 14:58:55 -- scripts/common.sh@354 -- # echo 2 00:12:31.487 14:58:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:31.487 14:58:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:31.487 14:58:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:31.487 14:58:55 -- scripts/common.sh@367 -- # return 0 00:12:31.487 14:58:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.487 14:58:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:31.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.487 --rc genhtml_branch_coverage=1 00:12:31.487 --rc genhtml_function_coverage=1 00:12:31.487 --rc genhtml_legend=1 00:12:31.487 --rc geninfo_all_blocks=1 00:12:31.487 --rc geninfo_unexecuted_blocks=1 00:12:31.487 00:12:31.487 ' 00:12:31.487 14:58:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:31.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.487 --rc genhtml_branch_coverage=1 00:12:31.487 --rc genhtml_function_coverage=1 00:12:31.487 --rc genhtml_legend=1 00:12:31.487 --rc geninfo_all_blocks=1 00:12:31.487 --rc geninfo_unexecuted_blocks=1 00:12:31.487 00:12:31.487 ' 00:12:31.487 14:58:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:31.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.487 --rc genhtml_branch_coverage=1 00:12:31.487 --rc genhtml_function_coverage=1 00:12:31.487 --rc genhtml_legend=1 00:12:31.487 --rc geninfo_all_blocks=1 00:12:31.487 --rc geninfo_unexecuted_blocks=1 00:12:31.487 00:12:31.487 ' 00:12:31.487 
14:58:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:31.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.487 --rc genhtml_branch_coverage=1 00:12:31.487 --rc genhtml_function_coverage=1 00:12:31.487 --rc genhtml_legend=1 00:12:31.487 --rc geninfo_all_blocks=1 00:12:31.487 --rc geninfo_unexecuted_blocks=1 00:12:31.487 00:12:31.487 ' 00:12:31.487 14:58:55 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:31.487 14:58:55 -- nvmf/common.sh@7 -- # uname -s 00:12:31.487 14:58:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.487 14:58:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.487 14:58:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.487 14:58:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.487 14:58:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.487 14:58:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.487 14:58:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.487 14:58:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.487 14:58:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.487 14:58:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.487 14:58:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:12:31.487 14:58:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:12:31.487 14:58:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.487 14:58:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.487 14:58:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:31.487 14:58:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:31.487 14:58:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.487 14:58:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.487 14:58:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.487 14:58:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.487 14:58:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.487 14:58:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.487 14:58:55 -- paths/export.sh@5 -- # export PATH 00:12:31.487 14:58:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.487 14:58:55 -- nvmf/common.sh@46 -- # : 0 00:12:31.487 14:58:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:31.487 14:58:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:31.487 14:58:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:31.487 14:58:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.487 14:58:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.487 14:58:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:31.487 14:58:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:31.487 14:58:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:31.487 14:58:55 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:31.487 14:58:55 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:31.487 14:58:55 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:31.487 14:58:55 -- target/invalid.sh@14 -- # target=foobar 00:12:31.487 14:58:55 -- target/invalid.sh@16 -- # RANDOM=0 00:12:31.487 14:58:55 -- target/invalid.sh@34 -- # nvmftestinit 00:12:31.487 14:58:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:31.487 14:58:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.487 14:58:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:31.487 14:58:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:31.487 14:58:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:31.487 14:58:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.487 14:58:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.487 14:58:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.488 14:58:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:31.488 14:58:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:31.488 14:58:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:31.488 14:58:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:31.488 14:58:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:31.488 14:58:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:31.488 14:58:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.488 14:58:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.488 14:58:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:12:31.488 14:58:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:31.488 14:58:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:31.488 14:58:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:31.488 14:58:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:31.488 14:58:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.488 14:58:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:31.488 14:58:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:31.488 14:58:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:31.488 14:58:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:31.488 14:58:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:31.488 14:58:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:31.488 Cannot find device "nvmf_tgt_br" 00:12:31.488 14:58:55 -- nvmf/common.sh@154 -- # true 00:12:31.488 14:58:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:31.746 Cannot find device "nvmf_tgt_br2" 00:12:31.746 14:58:55 -- nvmf/common.sh@155 -- # true 00:12:31.746 14:58:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:31.746 14:58:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:31.746 Cannot find device "nvmf_tgt_br" 00:12:31.746 14:58:55 -- nvmf/common.sh@157 -- # true 00:12:31.746 14:58:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:31.746 Cannot find device "nvmf_tgt_br2" 00:12:31.746 14:58:55 -- nvmf/common.sh@158 -- # true 00:12:31.746 14:58:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:31.746 14:58:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:31.746 14:58:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:31.746 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.746 14:58:55 -- nvmf/common.sh@161 -- # true 00:12:31.746 14:58:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:31.746 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.746 14:58:55 -- nvmf/common.sh@162 -- # true 00:12:31.746 14:58:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:31.746 14:58:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:31.746 14:58:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:31.746 14:58:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:31.746 14:58:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:31.746 14:58:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:31.746 14:58:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:31.746 14:58:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:31.746 14:58:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:31.746 14:58:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:31.746 14:58:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:31.746 14:58:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:31.746 14:58:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
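The nvmf_veth_init entries running here build the test topology: a network namespace for the target, veth pairs for the initiator and for the two target interfaces, and 10.0.0.x/24 addressing; the bridge, iptables rules and ping checks follow in the next entries. A compressed reconstruction of that plumbing, using the interface and namespace names from the trace (cleanup of any pre-existing devices omitted):

# Namespace and veth pairs (host-side peers end in _br).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator on 10.0.0.1, target listeners on 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring links up on both sides, bridge the host-side peers, allow TCP/4420 in.
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT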
00:12:31.746 14:58:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:31.746 14:58:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:31.746 14:58:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:31.746 14:58:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:31.746 14:58:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:31.746 14:58:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:31.746 14:58:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:31.746 14:58:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:32.005 14:58:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:32.005 14:58:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:32.005 14:58:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:32.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:12:32.005 00:12:32.005 --- 10.0.0.2 ping statistics --- 00:12:32.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.005 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:12:32.005 14:58:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:32.005 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:32.005 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:12:32.005 00:12:32.005 --- 10.0.0.3 ping statistics --- 00:12:32.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.005 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:12:32.005 14:58:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:32.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:32.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:32.005 00:12:32.005 --- 10.0.0.1 ping statistics --- 00:12:32.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.005 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:32.005 14:58:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.005 14:58:55 -- nvmf/common.sh@421 -- # return 0 00:12:32.005 14:58:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:32.005 14:58:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.005 14:58:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:32.005 14:58:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:32.005 14:58:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.005 14:58:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:32.005 14:58:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:32.005 14:58:55 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:32.005 14:58:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:32.005 14:58:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:32.005 14:58:55 -- common/autotest_common.sh@10 -- # set +x 00:12:32.005 14:58:55 -- nvmf/common.sh@469 -- # nvmfpid=78613 00:12:32.005 14:58:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.005 14:58:55 -- nvmf/common.sh@470 -- # waitforlisten 78613 00:12:32.005 14:58:55 -- common/autotest_common.sh@829 -- # '[' -z 78613 ']' 00:12:32.005 14:58:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.005 14:58:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:32.005 14:58:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.005 14:58:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:32.005 14:58:55 -- common/autotest_common.sh@10 -- # set +x 00:12:32.005 [2024-11-18 14:58:55.443943] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:32.006 [2024-11-18 14:58:55.444185] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.006 [2024-11-18 14:58:55.584205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.264 [2024-11-18 14:58:55.641332] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:32.264 [2024-11-18 14:58:55.641757] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.264 [2024-11-18 14:58:55.641806] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.264 [2024-11-18 14:58:55.641925] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
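With the namespace wired up and the ping checks passing, nvmfappstart launches the target inside it and waitforlisten blocks until the JSON-RPC socket answers; the entries above show nvmf_tgt starting with -m 0xF and the SPDK/DPDK initialization notices. A simplified stand-in for that start-and-wait step (the polling loop below is an approximation, not a copy of the real waitforlisten helper):

# Start nvmf_tgt inside the test namespace, then poll its RPC socket.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"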
00:12:32.264 [2024-11-18 14:58:55.642130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.264 [2024-11-18 14:58:55.642285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.264 [2024-11-18 14:58:55.642390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.264 [2024-11-18 14:58:55.642394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.831 14:58:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:32.831 14:58:56 -- common/autotest_common.sh@862 -- # return 0 00:12:32.831 14:58:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:32.831 14:58:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:32.831 14:58:56 -- common/autotest_common.sh@10 -- # set +x 00:12:33.090 14:58:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.090 14:58:56 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:33.090 14:58:56 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1771 00:12:33.090 [2024-11-18 14:58:56.622838] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:33.090 14:58:56 -- target/invalid.sh@40 -- # out='2024/11/18 14:58:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode1771 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:33.090 request: 00:12:33.090 { 00:12:33.090 "method": "nvmf_create_subsystem", 00:12:33.090 "params": { 00:12:33.090 "nqn": "nqn.2016-06.io.spdk:cnode1771", 00:12:33.090 "tgt_name": "foobar" 00:12:33.090 } 00:12:33.090 } 00:12:33.090 Got JSON-RPC error response 00:12:33.090 GoRPCClient: error on JSON-RPC call' 00:12:33.090 14:58:56 -- target/invalid.sh@41 -- # [[ 2024/11/18 14:58:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode1771 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:33.090 request: 00:12:33.090 { 00:12:33.090 "method": "nvmf_create_subsystem", 00:12:33.090 "params": { 00:12:33.090 "nqn": "nqn.2016-06.io.spdk:cnode1771", 00:12:33.090 "tgt_name": "foobar" 00:12:33.090 } 00:12:33.090 } 00:12:33.090 Got JSON-RPC error response 00:12:33.090 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:33.090 14:58:56 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:33.090 14:58:56 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5711 00:12:33.349 [2024-11-18 14:58:56.927259] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5711: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:33.609 14:58:56 -- target/invalid.sh@45 -- # out='2024/11/18 14:58:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode5711 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:33.609 request: 00:12:33.609 { 00:12:33.609 "method": "nvmf_create_subsystem", 00:12:33.609 "params": { 00:12:33.609 "nqn": "nqn.2016-06.io.spdk:cnode5711", 00:12:33.609 "serial_number": 
"SPDKISFASTANDAWESOME\u001f" 00:12:33.609 } 00:12:33.609 } 00:12:33.609 Got JSON-RPC error response 00:12:33.609 GoRPCClient: error on JSON-RPC call' 00:12:33.609 14:58:56 -- target/invalid.sh@46 -- # [[ 2024/11/18 14:58:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode5711 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:33.609 request: 00:12:33.609 { 00:12:33.609 "method": "nvmf_create_subsystem", 00:12:33.609 "params": { 00:12:33.609 "nqn": "nqn.2016-06.io.spdk:cnode5711", 00:12:33.609 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:33.609 } 00:12:33.609 } 00:12:33.609 Got JSON-RPC error response 00:12:33.609 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:33.609 14:58:56 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:33.609 14:58:56 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode20307 00:12:33.609 [2024-11-18 14:58:57.159591] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20307: invalid model number 'SPDK_Controller' 00:12:33.609 14:58:57 -- target/invalid.sh@50 -- # out='2024/11/18 14:58:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode20307], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:33.609 request: 00:12:33.609 { 00:12:33.609 "method": "nvmf_create_subsystem", 00:12:33.609 "params": { 00:12:33.609 "nqn": "nqn.2016-06.io.spdk:cnode20307", 00:12:33.609 "model_number": "SPDK_Controller\u001f" 00:12:33.609 } 00:12:33.609 } 00:12:33.609 Got JSON-RPC error response 00:12:33.609 GoRPCClient: error on JSON-RPC call' 00:12:33.609 14:58:57 -- target/invalid.sh@51 -- # [[ 2024/11/18 14:58:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode20307], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:33.609 request: 00:12:33.609 { 00:12:33.609 "method": "nvmf_create_subsystem", 00:12:33.609 "params": { 00:12:33.609 "nqn": "nqn.2016-06.io.spdk:cnode20307", 00:12:33.609 "model_number": "SPDK_Controller\u001f" 00:12:33.609 } 00:12:33.609 } 00:12:33.609 Got JSON-RPC error response 00:12:33.609 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:33.609 14:58:57 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:33.609 14:58:57 -- target/invalid.sh@19 -- # local length=21 ll 00:12:33.609 14:58:57 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:33.609 14:58:57 -- target/invalid.sh@21 -- # local chars 00:12:33.609 14:58:57 -- target/invalid.sh@22 -- # local string 00:12:33.609 14:58:57 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:33.609 14:58:57 -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:33.609 14:58:57 -- target/invalid.sh@25 -- # printf %x 47 00:12:33.609 14:58:57 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:33.609 14:58:57 -- target/invalid.sh@25 -- # string+=/ 00:12:33.609 14:58:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.609 14:58:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.609 14:58:57 -- target/invalid.sh@25 -- # printf %x 59 00:12:33.609 14:58:57 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:33.609 14:58:57 -- target/invalid.sh@25 -- # string+=';' 00:12:33.609 14:58:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.609 14:58:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # printf %x 78 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # string+=N 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # printf %x 67 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # string+=C 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # printf %x 80 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # string+=P 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # printf %x 82 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # string+=R 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # printf %x 109 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # string+=m 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # printf %x 52 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # string+=4 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # printf %x 111 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # string+=o 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # printf %x 46 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # string+=. 
00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # printf %x 43 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # string+=+ 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # printf %x 45 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # string+=- 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # printf %x 123 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # string+='{' 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # printf %x 64 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:33.868 14:58:57 -- target/invalid.sh@25 -- # string+=@ 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.868 14:58:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.869 14:58:57 -- target/invalid.sh@25 -- # printf %x 57 00:12:33.869 14:58:57 -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:33.869 14:58:57 -- target/invalid.sh@25 -- # string+=9 00:12:33.869 14:58:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.869 14:58:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.869 14:58:57 -- target/invalid.sh@25 -- # printf %x 60 00:12:33.869 14:58:57 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:33.869 14:58:57 -- target/invalid.sh@25 -- # string+='<' 00:12:33.869 14:58:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.869 14:58:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.869 14:58:57 -- target/invalid.sh@25 -- # printf %x 105 00:12:33.869 14:58:57 -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:33.869 14:58:57 -- target/invalid.sh@25 -- # string+=i 00:12:33.869 14:58:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.869 14:58:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.869 14:58:57 -- target/invalid.sh@25 -- # printf %x 54 00:12:33.869 14:58:57 -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:33.869 14:58:57 -- target/invalid.sh@25 -- # string+=6 00:12:33.869 14:58:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.869 14:58:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.869 14:58:57 -- target/invalid.sh@25 -- # printf %x 33 00:12:33.869 14:58:57 -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:33.869 14:58:57 -- target/invalid.sh@25 -- # string+='!' 
00:12:33.869 14:58:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.869 14:58:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.869 14:58:57 -- target/invalid.sh@25 -- # printf %x 39 00:12:33.869 14:58:57 -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:33.869 14:58:57 -- target/invalid.sh@25 -- # string+=\' 00:12:33.869 14:58:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.869 14:58:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.869 14:58:57 -- target/invalid.sh@25 -- # printf %x 120 00:12:33.869 14:58:57 -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:33.869 14:58:57 -- target/invalid.sh@25 -- # string+=x 00:12:33.869 14:58:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.869 14:58:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.869 14:58:57 -- target/invalid.sh@28 -- # [[ / == \- ]] 00:12:33.869 14:58:57 -- target/invalid.sh@31 -- # echo '/;NCPRm4o.+-{@9 /dev/null' 00:12:36.974 14:59:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.974 14:59:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:36.974 00:12:36.974 real 0m5.623s 00:12:36.974 user 0m22.303s 00:12:36.974 sys 0m1.235s 00:12:36.974 ************************************ 00:12:36.974 END TEST nvmf_invalid 00:12:36.974 ************************************ 00:12:36.974 14:59:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:36.974 14:59:00 -- common/autotest_common.sh@10 -- # set +x 00:12:36.974 14:59:00 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:36.974 14:59:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:36.974 14:59:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:36.974 14:59:00 -- common/autotest_common.sh@10 -- # set +x 00:12:36.974 ************************************ 00:12:36.974 START TEST nvmf_abort 00:12:36.974 ************************************ 00:12:36.974 14:59:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:37.233 * Looking for test storage... 00:12:37.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:37.233 14:59:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:37.233 14:59:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:37.233 14:59:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:37.233 14:59:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:37.233 14:59:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:37.233 14:59:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:37.233 14:59:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:37.233 14:59:00 -- scripts/common.sh@335 -- # IFS=.-: 00:12:37.233 14:59:00 -- scripts/common.sh@335 -- # read -ra ver1 00:12:37.233 14:59:00 -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.233 14:59:00 -- scripts/common.sh@336 -- # read -ra ver2 00:12:37.233 14:59:00 -- scripts/common.sh@337 -- # local 'op=<' 00:12:37.233 14:59:00 -- scripts/common.sh@339 -- # ver1_l=2 00:12:37.233 14:59:00 -- scripts/common.sh@340 -- # ver2_l=1 00:12:37.233 14:59:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:37.233 14:59:00 -- scripts/common.sh@343 -- # case "$op" in 00:12:37.233 14:59:00 -- scripts/common.sh@344 -- # : 1 00:12:37.233 14:59:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:37.233 14:59:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:37.233 14:59:00 -- scripts/common.sh@364 -- # decimal 1 00:12:37.233 14:59:00 -- scripts/common.sh@352 -- # local d=1 00:12:37.233 14:59:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.233 14:59:00 -- scripts/common.sh@354 -- # echo 1 00:12:37.233 14:59:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:37.233 14:59:00 -- scripts/common.sh@365 -- # decimal 2 00:12:37.233 14:59:00 -- scripts/common.sh@352 -- # local d=2 00:12:37.233 14:59:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.233 14:59:00 -- scripts/common.sh@354 -- # echo 2 00:12:37.233 14:59:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:37.233 14:59:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:37.233 14:59:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:37.233 14:59:00 -- scripts/common.sh@367 -- # return 0 00:12:37.233 14:59:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.233 14:59:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:37.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.233 --rc genhtml_branch_coverage=1 00:12:37.233 --rc genhtml_function_coverage=1 00:12:37.233 --rc genhtml_legend=1 00:12:37.233 --rc geninfo_all_blocks=1 00:12:37.233 --rc geninfo_unexecuted_blocks=1 00:12:37.233 00:12:37.233 ' 00:12:37.233 14:59:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:37.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.233 --rc genhtml_branch_coverage=1 00:12:37.233 --rc genhtml_function_coverage=1 00:12:37.233 --rc genhtml_legend=1 00:12:37.233 --rc geninfo_all_blocks=1 00:12:37.233 --rc geninfo_unexecuted_blocks=1 00:12:37.233 00:12:37.233 ' 00:12:37.233 14:59:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:37.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.233 --rc genhtml_branch_coverage=1 00:12:37.233 --rc genhtml_function_coverage=1 00:12:37.233 --rc genhtml_legend=1 00:12:37.233 --rc geninfo_all_blocks=1 00:12:37.233 --rc geninfo_unexecuted_blocks=1 00:12:37.233 00:12:37.233 ' 00:12:37.233 14:59:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:37.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.233 --rc genhtml_branch_coverage=1 00:12:37.233 --rc genhtml_function_coverage=1 00:12:37.233 --rc genhtml_legend=1 00:12:37.233 --rc geninfo_all_blocks=1 00:12:37.233 --rc geninfo_unexecuted_blocks=1 00:12:37.233 00:12:37.233 ' 00:12:37.233 14:59:00 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:37.233 14:59:00 -- nvmf/common.sh@7 -- # uname -s 00:12:37.233 14:59:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.233 14:59:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.233 14:59:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.233 14:59:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.233 14:59:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.233 14:59:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.233 14:59:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.233 14:59:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.233 14:59:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.233 14:59:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.233 14:59:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:12:37.233 
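The lt / cmp_versions / decimal calls traced above implement a dotted-version comparison in plain bash (here the installed lcov compares as older than 2, so the 1.x coverage option set ends up selected). A minimal sketch of the same idea, with illustrative helper names rather than the exact scripts/common.sh code:

  version_lt() {                        # usage: version_lt 1.15 2 -> succeeds when $1 < $2
      local -a ver1 ver2
      IFS=. read -ra ver1 <<< "$1"      # split dotted versions into numeric components
      IFS=. read -ra ver2 <<< "$2"
      local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      local v a b
      for (( v = 0; v < len; v++ )); do
          a=${ver1[v]:-0}; b=${ver2[v]:-0}   # missing components compare as 0
          (( a < b )) && return 0
          (( a > b )) && return 1
      done
      return 1                          # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov < 2: keep the 1.x coverage options"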
14:59:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:12:37.233 14:59:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.233 14:59:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.233 14:59:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:37.233 14:59:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:37.233 14:59:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.233 14:59:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.233 14:59:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.233 14:59:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.233 14:59:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.234 14:59:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.234 14:59:00 -- paths/export.sh@5 -- # export PATH 00:12:37.234 14:59:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.234 14:59:00 -- nvmf/common.sh@46 -- # : 0 00:12:37.234 14:59:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:37.234 14:59:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:37.234 14:59:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:37.234 14:59:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.234 14:59:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.234 14:59:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:12:37.234 14:59:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:37.234 14:59:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:37.234 14:59:00 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:37.234 14:59:00 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:37.234 14:59:00 -- target/abort.sh@14 -- # nvmftestinit 00:12:37.234 14:59:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:37.234 14:59:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.234 14:59:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:37.234 14:59:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:37.234 14:59:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:37.234 14:59:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.234 14:59:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.234 14:59:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.234 14:59:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:37.234 14:59:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:37.234 14:59:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:37.234 14:59:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:37.234 14:59:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:37.234 14:59:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:37.234 14:59:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.234 14:59:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.234 14:59:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:37.234 14:59:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:37.234 14:59:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:37.234 14:59:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:37.234 14:59:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:37.234 14:59:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.234 14:59:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:37.234 14:59:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:37.234 14:59:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:37.234 14:59:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:37.234 14:59:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:37.234 14:59:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:37.234 Cannot find device "nvmf_tgt_br" 00:12:37.234 14:59:00 -- nvmf/common.sh@154 -- # true 00:12:37.234 14:59:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:37.234 Cannot find device "nvmf_tgt_br2" 00:12:37.234 14:59:00 -- nvmf/common.sh@155 -- # true 00:12:37.234 14:59:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:37.234 14:59:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:37.234 Cannot find device "nvmf_tgt_br" 00:12:37.234 14:59:00 -- nvmf/common.sh@157 -- # true 00:12:37.234 14:59:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:37.234 Cannot find device "nvmf_tgt_br2" 00:12:37.234 14:59:00 -- nvmf/common.sh@158 -- # true 00:12:37.234 14:59:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:37.234 14:59:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:37.493 14:59:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:37.493 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:37.493 14:59:00 -- nvmf/common.sh@161 -- # true 00:12:37.493 14:59:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:37.493 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:37.493 14:59:00 -- nvmf/common.sh@162 -- # true 00:12:37.493 14:59:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:37.493 14:59:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:37.493 14:59:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:37.494 14:59:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:37.494 14:59:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:37.494 14:59:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:37.494 14:59:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:37.494 14:59:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:37.494 14:59:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:37.494 14:59:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:37.494 14:59:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:37.494 14:59:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:37.494 14:59:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:37.494 14:59:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:37.494 14:59:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:37.494 14:59:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:37.494 14:59:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:37.494 14:59:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:37.494 14:59:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:37.494 14:59:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:37.494 14:59:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:37.494 14:59:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:37.494 14:59:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:37.494 14:59:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:37.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:12:37.494 00:12:37.494 --- 10.0.0.2 ping statistics --- 00:12:37.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.494 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:12:37.494 14:59:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:37.494 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:37.494 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:12:37.494 00:12:37.494 --- 10.0.0.3 ping statistics --- 00:12:37.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.494 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:12:37.494 14:59:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:37.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:37.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:37.494 00:12:37.494 --- 10.0.0.1 ping statistics --- 00:12:37.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.494 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:37.494 14:59:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.494 14:59:01 -- nvmf/common.sh@421 -- # return 0 00:12:37.494 14:59:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:37.494 14:59:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.494 14:59:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:37.494 14:59:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:37.494 14:59:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.494 14:59:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:37.494 14:59:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:37.494 14:59:01 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:37.494 14:59:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:37.494 14:59:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:37.494 14:59:01 -- common/autotest_common.sh@10 -- # set +x 00:12:37.494 14:59:01 -- nvmf/common.sh@469 -- # nvmfpid=79128 00:12:37.494 14:59:01 -- nvmf/common.sh@470 -- # waitforlisten 79128 00:12:37.494 14:59:01 -- common/autotest_common.sh@829 -- # '[' -z 79128 ']' 00:12:37.494 14:59:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.494 14:59:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:37.494 14:59:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.494 14:59:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:37.494 14:59:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:37.494 14:59:01 -- common/autotest_common.sh@10 -- # set +x 00:12:37.494 [2024-11-18 14:59:01.079614] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:37.494 [2024-11-18 14:59:01.079674] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.753 [2024-11-18 14:59:01.214216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:37.753 [2024-11-18 14:59:01.287717] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:37.753 [2024-11-18 14:59:01.287865] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.753 [2024-11-18 14:59:01.287878] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.753 [2024-11-18 14:59:01.287886] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
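Everything from nvmf_veth_init through nvmfappstart above boils down to: build a veth-plus-namespace topology, verify it with ping, then launch nvmf_tgt inside the namespace. A condensed sketch of those steps, showing only one target interface (names, addresses, and the nvmf_tgt invocation are taken from the trace; the ordering is simplified):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                               # bridge the two host-side ends together
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                            # initiator can reach the target address
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &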
00:12:37.753 [2024-11-18 14:59:01.288063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.753 [2024-11-18 14:59:01.288194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.753 [2024-11-18 14:59:01.288200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.691 14:59:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:38.691 14:59:02 -- common/autotest_common.sh@862 -- # return 0 00:12:38.691 14:59:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:38.691 14:59:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:38.691 14:59:02 -- common/autotest_common.sh@10 -- # set +x 00:12:38.691 14:59:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.691 14:59:02 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:38.691 14:59:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.691 14:59:02 -- common/autotest_common.sh@10 -- # set +x 00:12:38.691 [2024-11-18 14:59:02.091894] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.691 14:59:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.691 14:59:02 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:38.691 14:59:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.691 14:59:02 -- common/autotest_common.sh@10 -- # set +x 00:12:38.692 Malloc0 00:12:38.692 14:59:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.692 14:59:02 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:38.692 14:59:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.692 14:59:02 -- common/autotest_common.sh@10 -- # set +x 00:12:38.692 Delay0 00:12:38.692 14:59:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.692 14:59:02 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:38.692 14:59:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.692 14:59:02 -- common/autotest_common.sh@10 -- # set +x 00:12:38.692 14:59:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.692 14:59:02 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:38.692 14:59:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.692 14:59:02 -- common/autotest_common.sh@10 -- # set +x 00:12:38.692 14:59:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.692 14:59:02 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:38.692 14:59:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.692 14:59:02 -- common/autotest_common.sh@10 -- # set +x 00:12:38.692 [2024-11-18 14:59:02.170325] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.692 14:59:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.692 14:59:02 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:38.692 14:59:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.692 14:59:02 -- common/autotest_common.sh@10 -- # set +x 00:12:38.692 14:59:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.692 14:59:02 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:38.951 [2024-11-18 14:59:02.356328] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:40.857 Initializing NVMe Controllers 00:12:40.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:40.857 controller IO queue size 128 less than required 00:12:40.857 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:40.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:40.857 Initialization complete. Launching workers. 00:12:40.857 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38061 00:12:40.857 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38122, failed to submit 62 00:12:40.857 success 38061, unsuccess 61, failed 0 00:12:40.857 14:59:04 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:40.857 14:59:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.857 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:12:40.857 14:59:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.857 14:59:04 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:40.857 14:59:04 -- target/abort.sh@38 -- # nvmftestfini 00:12:40.857 14:59:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:40.857 14:59:04 -- nvmf/common.sh@116 -- # sync 00:12:40.857 14:59:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:40.857 14:59:04 -- nvmf/common.sh@119 -- # set +e 00:12:40.857 14:59:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:40.857 14:59:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:41.116 rmmod nvme_tcp 00:12:41.116 rmmod nvme_fabrics 00:12:41.116 rmmod nvme_keyring 00:12:41.116 14:59:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:41.116 14:59:04 -- nvmf/common.sh@123 -- # set -e 00:12:41.116 14:59:04 -- nvmf/common.sh@124 -- # return 0 00:12:41.116 14:59:04 -- nvmf/common.sh@477 -- # '[' -n 79128 ']' 00:12:41.116 14:59:04 -- nvmf/common.sh@478 -- # killprocess 79128 00:12:41.116 14:59:04 -- common/autotest_common.sh@936 -- # '[' -z 79128 ']' 00:12:41.116 14:59:04 -- common/autotest_common.sh@940 -- # kill -0 79128 00:12:41.116 14:59:04 -- common/autotest_common.sh@941 -- # uname 00:12:41.116 14:59:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:41.116 14:59:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79128 00:12:41.116 14:59:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:41.116 14:59:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:41.116 killing process with pid 79128 00:12:41.116 14:59:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79128' 00:12:41.116 14:59:04 -- common/autotest_common.sh@955 -- # kill 79128 00:12:41.116 14:59:04 -- common/autotest_common.sh@960 -- # wait 79128 00:12:41.375 14:59:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:41.375 14:59:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:41.375 14:59:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:41.375 14:59:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:41.375 14:59:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:41.375 14:59:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.375 
14:59:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.375 14:59:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.375 14:59:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:41.375 00:12:41.375 real 0m4.318s 00:12:41.375 user 0m12.400s 00:12:41.375 sys 0m1.009s 00:12:41.375 14:59:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:41.375 ************************************ 00:12:41.375 END TEST nvmf_abort 00:12:41.375 ************************************ 00:12:41.375 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:12:41.375 14:59:04 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:41.375 14:59:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:41.375 14:59:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:41.375 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:12:41.375 ************************************ 00:12:41.375 START TEST nvmf_ns_hotplug_stress 00:12:41.375 ************************************ 00:12:41.375 14:59:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:41.375 * Looking for test storage... 00:12:41.639 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:41.639 14:59:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:41.639 14:59:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:41.639 14:59:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:41.639 14:59:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:41.639 14:59:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:41.639 14:59:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:41.639 14:59:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:41.639 14:59:05 -- scripts/common.sh@335 -- # IFS=.-: 00:12:41.639 14:59:05 -- scripts/common.sh@335 -- # read -ra ver1 00:12:41.639 14:59:05 -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.639 14:59:05 -- scripts/common.sh@336 -- # read -ra ver2 00:12:41.639 14:59:05 -- scripts/common.sh@337 -- # local 'op=<' 00:12:41.639 14:59:05 -- scripts/common.sh@339 -- # ver1_l=2 00:12:41.639 14:59:05 -- scripts/common.sh@340 -- # ver2_l=1 00:12:41.639 14:59:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:41.639 14:59:05 -- scripts/common.sh@343 -- # case "$op" in 00:12:41.639 14:59:05 -- scripts/common.sh@344 -- # : 1 00:12:41.639 14:59:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:41.639 14:59:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:41.639 14:59:05 -- scripts/common.sh@364 -- # decimal 1 00:12:41.639 14:59:05 -- scripts/common.sh@352 -- # local d=1 00:12:41.639 14:59:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.639 14:59:05 -- scripts/common.sh@354 -- # echo 1 00:12:41.640 14:59:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:41.640 14:59:05 -- scripts/common.sh@365 -- # decimal 2 00:12:41.640 14:59:05 -- scripts/common.sh@352 -- # local d=2 00:12:41.640 14:59:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:41.640 14:59:05 -- scripts/common.sh@354 -- # echo 2 00:12:41.640 14:59:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:41.640 14:59:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:41.640 14:59:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:41.640 14:59:05 -- scripts/common.sh@367 -- # return 0 00:12:41.640 14:59:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:41.640 14:59:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:41.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.640 --rc genhtml_branch_coverage=1 00:12:41.640 --rc genhtml_function_coverage=1 00:12:41.640 --rc genhtml_legend=1 00:12:41.640 --rc geninfo_all_blocks=1 00:12:41.640 --rc geninfo_unexecuted_blocks=1 00:12:41.640 00:12:41.640 ' 00:12:41.640 14:59:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:41.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.640 --rc genhtml_branch_coverage=1 00:12:41.640 --rc genhtml_function_coverage=1 00:12:41.640 --rc genhtml_legend=1 00:12:41.640 --rc geninfo_all_blocks=1 00:12:41.640 --rc geninfo_unexecuted_blocks=1 00:12:41.640 00:12:41.640 ' 00:12:41.640 14:59:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:41.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.640 --rc genhtml_branch_coverage=1 00:12:41.640 --rc genhtml_function_coverage=1 00:12:41.640 --rc genhtml_legend=1 00:12:41.640 --rc geninfo_all_blocks=1 00:12:41.640 --rc geninfo_unexecuted_blocks=1 00:12:41.640 00:12:41.640 ' 00:12:41.640 14:59:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:41.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.640 --rc genhtml_branch_coverage=1 00:12:41.640 --rc genhtml_function_coverage=1 00:12:41.640 --rc genhtml_legend=1 00:12:41.640 --rc geninfo_all_blocks=1 00:12:41.640 --rc geninfo_unexecuted_blocks=1 00:12:41.640 00:12:41.640 ' 00:12:41.640 14:59:05 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:41.640 14:59:05 -- nvmf/common.sh@7 -- # uname -s 00:12:41.640 14:59:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.640 14:59:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.640 14:59:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.640 14:59:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.640 14:59:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.640 14:59:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.640 14:59:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.640 14:59:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.640 14:59:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.640 14:59:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.640 14:59:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 
00:12:41.640 14:59:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:12:41.640 14:59:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.640 14:59:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.640 14:59:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:41.640 14:59:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:41.640 14:59:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.640 14:59:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.640 14:59:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.640 14:59:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.640 14:59:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.640 14:59:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.640 14:59:05 -- paths/export.sh@5 -- # export PATH 00:12:41.640 14:59:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.640 14:59:05 -- nvmf/common.sh@46 -- # : 0 00:12:41.640 14:59:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:41.640 14:59:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:41.640 14:59:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:41.640 14:59:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.640 14:59:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.640 14:59:05 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:12:41.640 14:59:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:41.640 14:59:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:41.640 14:59:05 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:41.640 14:59:05 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:41.640 14:59:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:41.640 14:59:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.640 14:59:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:41.640 14:59:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:41.640 14:59:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:41.640 14:59:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.640 14:59:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.640 14:59:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.640 14:59:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:41.640 14:59:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:41.640 14:59:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:41.640 14:59:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:41.640 14:59:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:41.640 14:59:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:41.640 14:59:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.640 14:59:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.640 14:59:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:41.640 14:59:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:41.640 14:59:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:41.640 14:59:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:41.640 14:59:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:41.640 14:59:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.640 14:59:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:41.640 14:59:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:41.640 14:59:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:41.640 14:59:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:41.640 14:59:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:41.640 14:59:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:41.640 Cannot find device "nvmf_tgt_br" 00:12:41.640 14:59:05 -- nvmf/common.sh@154 -- # true 00:12:41.640 14:59:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:41.640 Cannot find device "nvmf_tgt_br2" 00:12:41.640 14:59:05 -- nvmf/common.sh@155 -- # true 00:12:41.640 14:59:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:41.640 14:59:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:41.640 Cannot find device "nvmf_tgt_br" 00:12:41.640 14:59:05 -- nvmf/common.sh@157 -- # true 00:12:41.640 14:59:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:41.640 Cannot find device "nvmf_tgt_br2" 00:12:41.640 14:59:05 -- nvmf/common.sh@158 -- # true 00:12:41.640 14:59:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:41.640 14:59:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:41.949 14:59:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:41.949 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:41.949 14:59:05 -- nvmf/common.sh@161 -- # true 00:12:41.949 14:59:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:41.949 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:41.949 14:59:05 -- nvmf/common.sh@162 -- # true 00:12:41.949 14:59:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:41.949 14:59:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:41.949 14:59:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:41.949 14:59:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:41.949 14:59:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:41.949 14:59:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:41.949 14:59:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:41.949 14:59:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:41.949 14:59:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:41.949 14:59:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:41.949 14:59:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:41.949 14:59:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:41.949 14:59:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:41.949 14:59:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:41.949 14:59:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:41.949 14:59:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:41.949 14:59:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:41.949 14:59:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:41.949 14:59:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:41.949 14:59:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:41.949 14:59:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:41.949 14:59:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:41.949 14:59:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:41.949 14:59:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:41.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:41.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:12:41.949 00:12:41.949 --- 10.0.0.2 ping statistics --- 00:12:41.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.949 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:12:41.949 14:59:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:41.949 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:41.949 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:12:41.949 00:12:41.949 --- 10.0.0.3 ping statistics --- 00:12:41.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.949 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:12:41.949 14:59:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:41.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:41.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:12:41.949 00:12:41.949 --- 10.0.0.1 ping statistics --- 00:12:41.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.949 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:12:41.949 14:59:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.949 14:59:05 -- nvmf/common.sh@421 -- # return 0 00:12:41.949 14:59:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:41.949 14:59:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.949 14:59:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:41.949 14:59:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:41.949 14:59:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.949 14:59:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:41.949 14:59:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:41.949 14:59:05 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:41.949 14:59:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:41.949 14:59:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:41.949 14:59:05 -- common/autotest_common.sh@10 -- # set +x 00:12:41.949 14:59:05 -- nvmf/common.sh@469 -- # nvmfpid=79409 00:12:41.949 14:59:05 -- nvmf/common.sh@470 -- # waitforlisten 79409 00:12:41.949 14:59:05 -- common/autotest_common.sh@829 -- # '[' -z 79409 ']' 00:12:41.949 14:59:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:41.949 14:59:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.949 14:59:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:41.949 14:59:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.949 14:59:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:41.949 14:59:05 -- common/autotest_common.sh@10 -- # set +x 00:12:41.949 [2024-11-18 14:59:05.497579] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:41.949 [2024-11-18 14:59:05.497674] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.234 [2024-11-18 14:59:05.639468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:42.234 [2024-11-18 14:59:05.721983] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:42.234 [2024-11-18 14:59:05.722183] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.234 [2024-11-18 14:59:05.722200] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.234 [2024-11-18 14:59:05.722212] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
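As in the abort run earlier, waitforlisten then blocks until the freshly started target answers on its UNIX-domain RPC socket before any configuration RPCs are sent. A minimal sketch of that waiting pattern; the polling body is an assumption, only the socket path, retry count, and the message come from the trace:

  wait_for_rpc() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for (( i = 0; i < max_retries; i++ )); do
          kill -0 "$pid" 2>/dev/null || return 1     # give up if the target died
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
              &>/dev/null && return 0                # socket is up and answering
          sleep 0.5
      done
      return 1
  }
  wait_for_rpc 79409    # 79409 is the nvmfpid recorded above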
00:12:42.234 [2024-11-18 14:59:05.722382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.234 [2024-11-18 14:59:05.722929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.234 [2024-11-18 14:59:05.722990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.171 14:59:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.171 14:59:06 -- common/autotest_common.sh@862 -- # return 0 00:12:43.171 14:59:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:43.171 14:59:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:43.171 14:59:06 -- common/autotest_common.sh@10 -- # set +x 00:12:43.171 14:59:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.171 14:59:06 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:43.171 14:59:06 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:43.430 [2024-11-18 14:59:06.829643] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:43.430 14:59:06 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:43.689 14:59:07 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.947 [2024-11-18 14:59:07.396236] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.947 14:59:07 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:44.206 14:59:07 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:44.465 Malloc0 00:12:44.465 14:59:07 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:44.724 Delay0 00:12:44.724 14:59:08 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.724 14:59:08 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:44.982 NULL1 00:12:44.982 14:59:08 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:45.241 14:59:08 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:45.241 14:59:08 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=79541 00:12:45.241 14:59:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:12:45.241 14:59:08 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.626 Read completed with error (sct=0, sc=11) 00:12:46.626 14:59:09 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:46.627 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:12:46.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:46.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:46.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:46.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:46.627 14:59:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:46.627 14:59:10 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:46.885 true 00:12:46.885 14:59:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:12:46.885 14:59:10 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.821 14:59:11 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.080 14:59:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:48.080 14:59:11 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:48.080 true 00:12:48.080 14:59:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:12:48.080 14:59:11 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.338 14:59:11 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.596 14:59:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:48.596 14:59:12 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:48.855 true 00:12:48.855 14:59:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:12:48.855 14:59:12 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.790 14:59:13 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.790 14:59:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:49.790 14:59:13 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:50.049 true 00:12:50.049 14:59:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:12:50.049 14:59:13 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.308 14:59:13 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.567 14:59:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:50.567 14:59:14 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:50.826 true 00:12:50.826 14:59:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:12:50.826 14:59:14 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.762 14:59:15 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.021 14:59:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 
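Everything the trace drives between the target start and the perf summary reduces to a short RPC bring-up followed by a resize-while-IO loop. A condensed sketch, with the rpc.py invocations and their arguments taken verbatim from the trace and the surrounding loop an approximation of what ns_hotplug_stress.sh does:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Target bring-up, exactly as traced above.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns $nqn Delay0
$rpc bdev_null_create NULL1 1000 512
$rpc nvmf_subsystem_add_ns $nqn NULL1

# Reader running in the background for 30 s (same arguments as the perf command above).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

# Hot-plug iteration repeated while perf is still alive: yank namespace 1,
# plug Delay0 back in, and grow NULL1 by one block each pass.
null_size=1000
while kill -0 "$PERF_PID" 2> /dev/null; do
    $rpc nvmf_subsystem_remove_ns $nqn 1
    $rpc nvmf_subsystem_add_ns $nqn Delay0
    ((null_size++))
    $rpc bdev_null_resize NULL1 $null_size
done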
00:12:52.021 14:59:15 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:52.279 true 00:12:52.279 14:59:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:12:52.279 14:59:15 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.538 14:59:15 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.538 14:59:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:52.538 14:59:16 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:52.797 true 00:12:52.797 14:59:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:12:52.797 14:59:16 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.732 14:59:17 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:53.990 14:59:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:53.990 14:59:17 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:54.249 true 00:12:54.249 14:59:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:12:54.249 14:59:17 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.508 14:59:17 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.508 14:59:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:54.508 14:59:18 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:54.767 true 00:12:54.767 14:59:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:12:54.767 14:59:18 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.704 14:59:19 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.962 14:59:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:55.962 14:59:19 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:56.221 true 00:12:56.221 14:59:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:12:56.221 14:59:19 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.480 14:59:19 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.738 14:59:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:56.738 14:59:20 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:56.738 true 00:12:56.738 14:59:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:12:56.738 14:59:20 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:12:57.674 14:59:21 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.933 14:59:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:57.933 14:59:21 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:58.191 true 00:12:58.191 14:59:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:12:58.191 14:59:21 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.450 14:59:21 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:58.709 14:59:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:58.709 14:59:22 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:58.968 true 00:12:58.968 14:59:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:12:58.968 14:59:22 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.903 14:59:23 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.903 14:59:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:59.903 14:59:23 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:00.162 true 00:13:00.420 14:59:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:13:00.420 14:59:23 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.421 14:59:23 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.679 14:59:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:00.679 14:59:24 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:00.938 true 00:13:00.938 14:59:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:13:00.938 14:59:24 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.874 14:59:25 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.133 14:59:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:02.133 14:59:25 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:02.133 true 00:13:02.133 14:59:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:13:02.133 14:59:25 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.392 14:59:25 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.651 14:59:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:02.651 14:59:26 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:02.910 true 00:13:02.910 14:59:26 -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:13:02.910 14:59:26 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.846 14:59:27 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.104 14:59:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:04.104 14:59:27 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:04.363 true 00:13:04.363 14:59:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:13:04.363 14:59:27 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.622 14:59:27 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.622 14:59:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:04.622 14:59:28 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:04.881 true 00:13:04.881 14:59:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:13:04.881 14:59:28 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.825 14:59:29 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.085 14:59:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:06.085 14:59:29 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:06.085 true 00:13:06.085 14:59:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:13:06.085 14:59:29 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.343 14:59:29 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.602 14:59:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:06.602 14:59:30 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:06.861 true 00:13:06.861 14:59:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:13:06.861 14:59:30 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.796 14:59:31 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.056 14:59:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:08.056 14:59:31 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:08.315 true 00:13:08.315 14:59:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:13:08.315 14:59:31 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.574 14:59:31 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.574 14:59:32 -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1023 00:13:08.574 14:59:32 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:08.832 true 00:13:08.832 14:59:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:13:08.832 14:59:32 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.814 14:59:33 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:10.163 14:59:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:10.163 14:59:33 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:10.422 true 00:13:10.422 14:59:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:13:10.422 14:59:33 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.422 14:59:33 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.682 14:59:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:10.682 14:59:34 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:10.940 true 00:13:10.940 14:59:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:13:10.940 14:59:34 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.877 14:59:35 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.136 14:59:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:12.136 14:59:35 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:12.394 true 00:13:12.394 14:59:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:13:12.394 14:59:35 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.651 14:59:36 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.911 14:59:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:12.911 14:59:36 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:12.911 true 00:13:13.170 14:59:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:13:13.170 14:59:36 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.170 14:59:36 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.429 14:59:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:13.429 14:59:36 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:13.688 true 00:13:13.688 14:59:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:13:13.688 14:59:37 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.067 14:59:38 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.067 14:59:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:15.067 14:59:38 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:15.326 true 00:13:15.326 14:59:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:13:15.326 14:59:38 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.263 Initializing NVMe Controllers 00:13:16.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:16.263 Controller IO queue size 128, less than required. 00:13:16.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:16.263 Controller IO queue size 128, less than required. 00:13:16.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:16.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:16.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:16.263 Initialization complete. Launching workers. 
00:13:16.263 ======================================================== 00:13:16.263 Latency(us) 00:13:16.263 Device Information : IOPS MiB/s Average min max 00:13:16.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 552.23 0.27 133236.84 2025.03 1101379.77 00:13:16.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15103.54 7.37 8474.44 1969.29 540189.73 00:13:16.263 ======================================================== 00:13:16.263 Total : 15655.77 7.64 12875.22 1969.29 1101379.77 00:13:16.263 00:13:16.263 14:59:39 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.263 14:59:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:16.263 14:59:39 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:16.522 true 00:13:16.522 14:59:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79541 00:13:16.522 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (79541) - No such process 00:13:16.522 14:59:39 -- target/ns_hotplug_stress.sh@53 -- # wait 79541 00:13:16.522 14:59:39 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.781 14:59:40 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:17.040 14:59:40 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:17.040 14:59:40 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:17.040 14:59:40 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:17.040 14:59:40 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:17.040 14:59:40 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:17.040 null0 00:13:17.299 14:59:40 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:17.299 14:59:40 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:17.299 14:59:40 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:17.299 null1 00:13:17.299 14:59:40 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:17.299 14:59:40 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:17.299 14:59:40 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:17.558 null2 00:13:17.558 14:59:41 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:17.558 14:59:41 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:17.558 14:59:41 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:17.817 null3 00:13:17.817 14:59:41 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:17.817 14:59:41 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:17.817 14:59:41 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:18.077 null4 00:13:18.077 14:59:41 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:18.077 14:59:41 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:18.077 14:59:41 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:18.336 null5 00:13:18.336 14:59:41 -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:18.336 14:59:41 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:18.336 14:59:41 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:18.336 null6 00:13:18.336 14:59:41 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:18.336 14:59:41 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:18.336 14:59:41 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:18.595 null7 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:18.595 14:59:42 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
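The interleaved add/remove calls from 14:59:42 onward come from eight background workers, each pinned to one null bdev and one namespace ID. A sketch of the per-worker routine, pieced together from the ns_hotplug_stress.sh@14–@18 lines in the trace (names and argument order are as traced; the bound of 10 matches the (( i < 10 )) checks):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # as above

# One worker: attach and detach a single bdev under a fixed NSID, ten times.
add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}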
00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:18.596 14:59:42 -- target/ns_hotplug_stress.sh@66 -- # wait 80596 80598 80600 80601 80603 80605 80608 80609 00:13:18.854 14:59:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.854 14:59:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:18.855 14:59:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:18.855 14:59:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:18.855 14:59:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:18.855 14:59:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:19.113 14:59:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:19.113 14:59:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:19.113 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.113 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.113 14:59:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:19.113 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.113 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.113 14:59:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:19.113 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.113 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.113 14:59:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:19.113 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.113 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.113 14:59:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:19.113 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.113 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( 
i < 10 )) 00:13:19.113 14:59:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:19.372 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.372 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.372 14:59:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:19.372 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.372 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.372 14:59:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:19.372 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.372 14:59:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.372 14:59:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:19.372 14:59:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.372 14:59:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:19.372 14:59:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:19.372 14:59:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:19.631 14:59:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:19.631 14:59:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:19.631 14:59:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:19.631 14:59:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:19.631 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.631 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.632 14:59:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:19.632 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.632 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.632 14:59:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:19.632 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.632 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.632 14:59:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:19.632 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.632 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.632 14:59:43 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:19.890 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.890 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.890 14:59:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:19.890 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.890 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.890 14:59:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:19.890 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.890 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.890 14:59:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:19.890 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:19.890 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:19.890 14:59:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:19.890 14:59:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.890 14:59:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:19.890 14:59:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:19.890 14:59:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:20.149 14:59:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:20.149 14:59:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:20.149 14:59:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:20.149 14:59:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:20.149 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.149 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.149 14:59:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:20.149 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.149 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.149 14:59:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:20.149 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.149 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.149 14:59:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
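The pids+=($!) and wait lines above correspond to the launcher stanza, reconstructed here from the ns_hotplug_stress.sh@58–@66 markers and condensed into a single loop (the script itself creates the null bdevs in one pass and starts the workers in a second; the 100-block/4096-byte sizes and nthreads=8 are straight from the trace):

nthreads=8
pids=()

for ((i = 0; i < nthreads; i++)); do
    $rpc bdev_null_create "null$i" 100 4096      # null0 .. null7, 100 blocks of 4096 B
    add_remove "$((i + 1))" "null$i" &           # worker i hammers NSID i+1
    pids+=($!)
done

# Block until every worker has completed its ten add/remove iterations.
wait "${pids[@]}"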
00:13:20.149 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.149 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.149 14:59:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:20.407 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.407 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.407 14:59:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:20.407 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.407 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.407 14:59:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:20.407 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.407 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.407 14:59:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:20.407 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.407 14:59:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.407 14:59:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:20.407 14:59:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.407 14:59:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:20.407 14:59:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:20.407 14:59:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:20.666 14:59:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:20.666 14:59:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:20.666 14:59:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:20.666 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.666 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.666 14:59:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:20.666 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.666 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.666 14:59:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:20.666 14:59:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:20.666 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.666 14:59:44 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.666 14:59:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:20.666 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.666 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.666 14:59:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:20.925 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.925 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.925 14:59:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:20.925 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.925 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.925 14:59:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:20.925 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.925 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.925 14:59:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:20.925 14:59:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.925 14:59:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:20.925 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.925 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.925 14:59:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:20.925 14:59:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:20.925 14:59:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:21.184 14:59:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:21.184 14:59:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:21.184 14:59:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:21.184 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.184 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.184 14:59:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:21.184 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.184 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.184 14:59:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:21.184 14:59:44 -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:13:21.184 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.184 14:59:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:21.184 14:59:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:21.184 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.184 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.184 14:59:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:21.443 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.443 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.443 14:59:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:21.443 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.443 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.443 14:59:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:21.443 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.443 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.443 14:59:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:21.443 14:59:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.443 14:59:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:21.443 14:59:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:21.443 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.443 14:59:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.443 14:59:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:21.443 14:59:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:21.702 14:59:45 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:21.702 14:59:45 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:21.702 14:59:45 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:21.702 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.702 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.702 14:59:45 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:21.702 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.702 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.702 14:59:45 
-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:21.702 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.702 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.702 14:59:45 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:21.702 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.702 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.702 14:59:45 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:21.702 14:59:45 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:21.960 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.960 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.960 14:59:45 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:21.960 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.960 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.960 14:59:45 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:21.960 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.960 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.960 14:59:45 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:21.960 14:59:45 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.960 14:59:45 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:21.960 14:59:45 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:21.960 14:59:45 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:21.960 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.960 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.960 14:59:45 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:21.960 14:59:45 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:22.218 14:59:45 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:22.218 14:59:45 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:22.218 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.218 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.218 14:59:45 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:13:22.218 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.218 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.218 14:59:45 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:22.218 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.218 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.218 14:59:45 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:22.218 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.219 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.219 14:59:45 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:22.219 14:59:45 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:22.219 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.219 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.219 14:59:45 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:22.477 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.477 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.477 14:59:45 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:22.477 14:59:45 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:22.477 14:59:45 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.477 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.477 14:59:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.477 14:59:45 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:22.477 14:59:45 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:22.477 14:59:45 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:22.477 14:59:46 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:22.736 14:59:46 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:22.736 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.736 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.736 14:59:46 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:22.736 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.736 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.736 14:59:46 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:22.736 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.736 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.736 14:59:46 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:22.736 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.736 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.736 14:59:46 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:22.736 14:59:46 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:22.736 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.736 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.736 14:59:46 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:22.736 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.737 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.737 14:59:46 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:22.996 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.996 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.996 14:59:46 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:22.996 14:59:46 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.996 14:59:46 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:22.996 14:59:46 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:22.996 14:59:46 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:22.996 14:59:46 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:22.996 14:59:46 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:22.996 14:59:46 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:22.996 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.996 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.996 14:59:46 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:23.255 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.255 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.255 14:59:46 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 
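The ns_hotplug_stress.sh@16-@18 markers in the trace above correspond to a counted loop that keeps attaching the eight null bdevs as namespaces 1-8 of cnode1 and then hot-removing them again. The sketch below is a reconstruction, not the verbatim test script: the rpc.py invocations are copied from the trace, but the loop shape and the backgrounding are assumptions (the interleaved add/remove ordering in the trace suggests the calls race; the grouping here is a simplification).

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
for (( i = 0; i < 10; ++i )); do               # ns_hotplug_stress.sh@16 in the trace
    for n in $(seq 1 8); do
        # ns_hotplug_stress.sh@17: attach null bdev "null$((n-1))" as namespace $n
        "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
    done
    wait
    for n in $(seq 1 8); do
        # ns_hotplug_stress.sh@18: hot-remove the namespace so the initiator sees the event
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n" &
    done
    wait
done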
00:13:23.255 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.255 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.255 14:59:46 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:23.255 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.255 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.255 14:59:46 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:23.255 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.255 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.255 14:59:46 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:23.255 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.255 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.255 14:59:46 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:23.255 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.255 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.255 14:59:46 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:23.255 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.255 14:59:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.255 14:59:46 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:23.255 14:59:46 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:23.514 14:59:46 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.514 14:59:46 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:23.514 14:59:46 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:23.514 14:59:46 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:23.514 14:59:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:23.514 14:59:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:23.514 14:59:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:23.514 14:59:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.514 14:59:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.773 14:59:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.773 14:59:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.773 14:59:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:13:23.773 14:59:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.773 14:59:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.773 14:59:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:23.773 14:59:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.773 14:59:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.773 14:59:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.773 14:59:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.773 14:59:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.773 14:59:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.773 14:59:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.773 14:59:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.773 14:59:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.773 14:59:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.773 14:59:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:23.773 14:59:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:24.031 14:59:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.031 14:59:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.031 14:59:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.031 14:59:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.031 14:59:47 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:24.031 14:59:47 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:24.031 14:59:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:24.031 14:59:47 -- nvmf/common.sh@116 -- # sync 00:13:24.031 14:59:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:24.031 14:59:47 -- nvmf/common.sh@119 -- # set +e 00:13:24.031 14:59:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:24.031 14:59:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:24.031 rmmod nvme_tcp 00:13:24.031 rmmod nvme_fabrics 00:13:24.031 rmmod nvme_keyring 00:13:24.031 14:59:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:24.031 14:59:47 -- nvmf/common.sh@123 -- # set -e 00:13:24.031 14:59:47 -- nvmf/common.sh@124 -- # return 0 00:13:24.031 14:59:47 -- nvmf/common.sh@477 -- # '[' -n 79409 ']' 00:13:24.031 14:59:47 -- nvmf/common.sh@478 -- # killprocess 79409 00:13:24.031 14:59:47 -- common/autotest_common.sh@936 -- # '[' -z 79409 ']' 00:13:24.031 14:59:47 -- common/autotest_common.sh@940 -- # kill -0 79409 00:13:24.290 14:59:47 -- common/autotest_common.sh@941 -- # uname 00:13:24.290 14:59:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:24.290 14:59:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79409 00:13:24.290 14:59:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:24.290 14:59:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:24.290 14:59:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79409' 00:13:24.290 killing process with pid 79409 00:13:24.290 14:59:47 -- common/autotest_common.sh@955 -- # kill 79409 00:13:24.290 14:59:47 -- common/autotest_common.sh@960 -- # wait 79409 00:13:24.549 14:59:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:24.549 
14:59:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:24.549 14:59:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:24.549 14:59:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:24.549 14:59:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:24.549 14:59:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.549 14:59:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.549 14:59:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.549 14:59:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:24.549 00:13:24.549 real 0m43.042s 00:13:24.549 user 3m24.241s 00:13:24.549 sys 0m11.694s 00:13:24.549 14:59:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:24.549 14:59:47 -- common/autotest_common.sh@10 -- # set +x 00:13:24.549 ************************************ 00:13:24.549 END TEST nvmf_ns_hotplug_stress 00:13:24.549 ************************************ 00:13:24.549 14:59:47 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:24.549 14:59:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:24.549 14:59:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:24.549 14:59:47 -- common/autotest_common.sh@10 -- # set +x 00:13:24.549 ************************************ 00:13:24.549 START TEST nvmf_connect_stress 00:13:24.549 ************************************ 00:13:24.549 14:59:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:24.549 * Looking for test storage... 00:13:24.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:24.549 14:59:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:24.549 14:59:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:24.549 14:59:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:24.809 14:59:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:24.809 14:59:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:24.809 14:59:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:24.809 14:59:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:24.809 14:59:48 -- scripts/common.sh@335 -- # IFS=.-: 00:13:24.809 14:59:48 -- scripts/common.sh@335 -- # read -ra ver1 00:13:24.809 14:59:48 -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.809 14:59:48 -- scripts/common.sh@336 -- # read -ra ver2 00:13:24.809 14:59:48 -- scripts/common.sh@337 -- # local 'op=<' 00:13:24.809 14:59:48 -- scripts/common.sh@339 -- # ver1_l=2 00:13:24.809 14:59:48 -- scripts/common.sh@340 -- # ver2_l=1 00:13:24.809 14:59:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:24.809 14:59:48 -- scripts/common.sh@343 -- # case "$op" in 00:13:24.809 14:59:48 -- scripts/common.sh@344 -- # : 1 00:13:24.809 14:59:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:24.809 14:59:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:24.809 14:59:48 -- scripts/common.sh@364 -- # decimal 1 00:13:24.809 14:59:48 -- scripts/common.sh@352 -- # local d=1 00:13:24.809 14:59:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.809 14:59:48 -- scripts/common.sh@354 -- # echo 1 00:13:24.809 14:59:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:24.809 14:59:48 -- scripts/common.sh@365 -- # decimal 2 00:13:24.809 14:59:48 -- scripts/common.sh@352 -- # local d=2 00:13:24.809 14:59:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.809 14:59:48 -- scripts/common.sh@354 -- # echo 2 00:13:24.809 14:59:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:24.809 14:59:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:24.809 14:59:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:24.809 14:59:48 -- scripts/common.sh@367 -- # return 0 00:13:24.809 14:59:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.809 14:59:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:24.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.809 --rc genhtml_branch_coverage=1 00:13:24.809 --rc genhtml_function_coverage=1 00:13:24.809 --rc genhtml_legend=1 00:13:24.809 --rc geninfo_all_blocks=1 00:13:24.809 --rc geninfo_unexecuted_blocks=1 00:13:24.809 00:13:24.809 ' 00:13:24.809 14:59:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:24.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.809 --rc genhtml_branch_coverage=1 00:13:24.809 --rc genhtml_function_coverage=1 00:13:24.809 --rc genhtml_legend=1 00:13:24.809 --rc geninfo_all_blocks=1 00:13:24.809 --rc geninfo_unexecuted_blocks=1 00:13:24.809 00:13:24.809 ' 00:13:24.809 14:59:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:24.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.809 --rc genhtml_branch_coverage=1 00:13:24.809 --rc genhtml_function_coverage=1 00:13:24.809 --rc genhtml_legend=1 00:13:24.809 --rc geninfo_all_blocks=1 00:13:24.809 --rc geninfo_unexecuted_blocks=1 00:13:24.809 00:13:24.809 ' 00:13:24.809 14:59:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:24.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.809 --rc genhtml_branch_coverage=1 00:13:24.809 --rc genhtml_function_coverage=1 00:13:24.809 --rc genhtml_legend=1 00:13:24.809 --rc geninfo_all_blocks=1 00:13:24.809 --rc geninfo_unexecuted_blocks=1 00:13:24.809 00:13:24.809 ' 00:13:24.809 14:59:48 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:24.809 14:59:48 -- nvmf/common.sh@7 -- # uname -s 00:13:24.809 14:59:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.809 14:59:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.809 14:59:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.809 14:59:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.809 14:59:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.809 14:59:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.809 14:59:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.809 14:59:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.809 14:59:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.809 14:59:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.809 14:59:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 
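The scripts/common.sh trace above (cmp_versions, decimal, the (( ver1[v] < ver2[v] )) checks) is the harness comparing the installed lcov version 1.15 against 2 before picking the --rc lcov_branch_coverage=1 style of coverage options. A stripped-down illustration of that field-by-field dotted-version comparison follows; it is a simplified re-implementation for clarity, not the verbatim helper.

version_lt() {                          # succeeds when version $1 sorts before version $2
    local IFS='.-'
    local -a a b
    local v
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for (( v = 0; v < ${#a[@]} || v < ${#b[@]}; ++v )); do
        (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
    done
    return 1                            # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2.x: use the legacy --rc option spelling"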
00:13:24.809 14:59:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:13:24.809 14:59:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.809 14:59:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.809 14:59:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:24.809 14:59:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:24.809 14:59:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.809 14:59:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.809 14:59:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.809 14:59:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.809 14:59:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.809 14:59:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.809 14:59:48 -- paths/export.sh@5 -- # export PATH 00:13:24.809 14:59:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.809 14:59:48 -- nvmf/common.sh@46 -- # : 0 00:13:24.809 14:59:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:24.809 14:59:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:24.809 14:59:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:24.809 14:59:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.809 14:59:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.809 14:59:48 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:24.809 14:59:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:24.809 14:59:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:24.809 14:59:48 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:24.809 14:59:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:24.809 14:59:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.809 14:59:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:24.809 14:59:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:24.809 14:59:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:24.809 14:59:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.809 14:59:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.809 14:59:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.809 14:59:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:24.809 14:59:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:24.809 14:59:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:24.809 14:59:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:24.809 14:59:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:24.809 14:59:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:24.809 14:59:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:24.809 14:59:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:24.809 14:59:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:24.809 14:59:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:24.809 14:59:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:24.809 14:59:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:24.809 14:59:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:24.809 14:59:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.809 14:59:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:24.809 14:59:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:24.809 14:59:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:24.809 14:59:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:24.809 14:59:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:24.809 14:59:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:24.809 Cannot find device "nvmf_tgt_br" 00:13:24.809 14:59:48 -- nvmf/common.sh@154 -- # true 00:13:24.810 14:59:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:24.810 Cannot find device "nvmf_tgt_br2" 00:13:24.810 14:59:48 -- nvmf/common.sh@155 -- # true 00:13:24.810 14:59:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:24.810 14:59:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:24.810 Cannot find device "nvmf_tgt_br" 00:13:24.810 14:59:48 -- nvmf/common.sh@157 -- # true 00:13:24.810 14:59:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:24.810 Cannot find device "nvmf_tgt_br2" 00:13:24.810 14:59:48 -- nvmf/common.sh@158 -- # true 00:13:24.810 14:59:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:24.810 14:59:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:24.810 14:59:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:24.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:24.810 14:59:48 -- nvmf/common.sh@161 -- # true 00:13:24.810 14:59:48 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:24.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:24.810 14:59:48 -- nvmf/common.sh@162 -- # true 00:13:24.810 14:59:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:24.810 14:59:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:24.810 14:59:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:24.810 14:59:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:24.810 14:59:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:24.810 14:59:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:24.810 14:59:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:24.810 14:59:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:24.810 14:59:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:24.810 14:59:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:24.810 14:59:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:24.810 14:59:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:24.810 14:59:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:25.069 14:59:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:25.069 14:59:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:25.069 14:59:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:25.069 14:59:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:25.069 14:59:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:25.069 14:59:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:25.069 14:59:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:25.069 14:59:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:25.069 14:59:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:25.069 14:59:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:25.069 14:59:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:25.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:25.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:13:25.069 00:13:25.069 --- 10.0.0.2 ping statistics --- 00:13:25.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.069 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:13:25.069 14:59:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:25.069 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:25.069 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:13:25.069 00:13:25.069 --- 10.0.0.3 ping statistics --- 00:13:25.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.069 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:13:25.069 14:59:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:25.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:25.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:13:25.069 00:13:25.069 --- 10.0.0.1 ping statistics --- 00:13:25.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.069 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:25.069 14:59:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.069 14:59:48 -- nvmf/common.sh@421 -- # return 0 00:13:25.069 14:59:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:25.069 14:59:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.069 14:59:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:25.069 14:59:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:25.069 14:59:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.069 14:59:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:25.069 14:59:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:25.069 14:59:48 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:25.069 14:59:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:25.069 14:59:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:25.069 14:59:48 -- common/autotest_common.sh@10 -- # set +x 00:13:25.069 14:59:48 -- nvmf/common.sh@469 -- # nvmfpid=81927 00:13:25.069 14:59:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:25.069 14:59:48 -- nvmf/common.sh@470 -- # waitforlisten 81927 00:13:25.069 14:59:48 -- common/autotest_common.sh@829 -- # '[' -z 81927 ']' 00:13:25.069 14:59:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.069 14:59:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:25.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.069 14:59:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.069 14:59:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:25.069 14:59:48 -- common/autotest_common.sh@10 -- # set +x 00:13:25.069 [2024-11-18 14:59:48.594701] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:25.069 [2024-11-18 14:59:48.594785] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.328 [2024-11-18 14:59:48.734552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:25.328 [2024-11-18 14:59:48.793655] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:25.328 [2024-11-18 14:59:48.794129] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.328 [2024-11-18 14:59:48.794188] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.328 [2024-11-18 14:59:48.794467] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
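Condensed, the nvmf_veth_init sequence traced in the preceding lines builds a veth/bridge topology with the target ends moved into the nvmf_tgt_ns_spdk namespace, verifies reachability with ping, and only then starts nvmf_tgt inside that namespace. The trimmed sketch below repeats the same steps with interface names and addresses copied from the trace; the second target interface (nvmf_tgt_if2 / 10.0.0.3), the teardown half, and error handling are omitted.

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator leg
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target leg
ip link set nvmf_tgt_if netns "$NS"                            # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                                # host-side peers hang off one bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                             # initiator -> target reachability check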
00:13:25.328 [2024-11-18 14:59:48.794611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.328 [2024-11-18 14:59:48.795127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.328 [2024-11-18 14:59:48.795139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.264 14:59:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:26.264 14:59:49 -- common/autotest_common.sh@862 -- # return 0 00:13:26.264 14:59:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:26.264 14:59:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:26.264 14:59:49 -- common/autotest_common.sh@10 -- # set +x 00:13:26.264 14:59:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.264 14:59:49 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:26.264 14:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.264 14:59:49 -- common/autotest_common.sh@10 -- # set +x 00:13:26.264 [2024-11-18 14:59:49.653102] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.264 14:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.264 14:59:49 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:26.264 14:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.264 14:59:49 -- common/autotest_common.sh@10 -- # set +x 00:13:26.264 14:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.264 14:59:49 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.264 14:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.264 14:59:49 -- common/autotest_common.sh@10 -- # set +x 00:13:26.264 [2024-11-18 14:59:49.670951] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.264 14:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.264 14:59:49 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:26.264 14:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.264 14:59:49 -- common/autotest_common.sh@10 -- # set +x 00:13:26.264 NULL1 00:13:26.264 14:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.264 14:59:49 -- target/connect_stress.sh@21 -- # PERF_PID=81986 00:13:26.264 14:59:49 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:26.264 14:59:49 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:26.264 14:59:49 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:26.264 14:59:49 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:26.264 14:59:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.264 14:59:49 -- target/connect_stress.sh@28 -- # cat 00:13:26.264 14:59:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.264 14:59:49 -- target/connect_stress.sh@28 -- # cat 00:13:26.264 14:59:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.264 14:59:49 -- target/connect_stress.sh@28 -- # cat 00:13:26.264 14:59:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.264 14:59:49 -- 
target/connect_stress.sh@28 -- # cat 00:13:26.264 14:59:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.264 14:59:49 -- target/connect_stress.sh@28 -- # cat 00:13:26.264 14:59:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.264 14:59:49 -- target/connect_stress.sh@28 -- # cat 00:13:26.264 14:59:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.264 14:59:49 -- target/connect_stress.sh@28 -- # cat 00:13:26.264 14:59:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.264 14:59:49 -- target/connect_stress.sh@28 -- # cat 00:13:26.264 14:59:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.264 14:59:49 -- target/connect_stress.sh@28 -- # cat 00:13:26.264 14:59:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.264 14:59:49 -- target/connect_stress.sh@28 -- # cat 00:13:26.264 14:59:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.264 14:59:49 -- target/connect_stress.sh@28 -- # cat 00:13:26.264 14:59:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.264 14:59:49 -- target/connect_stress.sh@28 -- # cat 00:13:26.264 14:59:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.264 14:59:49 -- target/connect_stress.sh@28 -- # cat 00:13:26.264 14:59:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.264 14:59:49 -- target/connect_stress.sh@28 -- # cat 00:13:26.264 14:59:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.264 14:59:49 -- target/connect_stress.sh@28 -- # cat 00:13:26.264 14:59:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.264 14:59:49 -- target/connect_stress.sh@28 -- # cat 00:13:26.264 14:59:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.264 14:59:49 -- target/connect_stress.sh@28 -- # cat 00:13:26.264 14:59:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.264 14:59:49 -- target/connect_stress.sh@28 -- # cat 00:13:26.264 14:59:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.264 14:59:49 -- target/connect_stress.sh@28 -- # cat 00:13:26.264 14:59:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.264 14:59:49 -- target/connect_stress.sh@28 -- # cat 00:13:26.264 14:59:49 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:26.264 14:59:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.264 14:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.264 14:59:49 -- common/autotest_common.sh@10 -- # set +x 00:13:26.523 14:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.523 14:59:50 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:26.523 14:59:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.523 14:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.523 14:59:50 -- common/autotest_common.sh@10 -- # set +x 00:13:27.090 14:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.090 14:59:50 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:27.090 14:59:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.090 14:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.090 14:59:50 -- common/autotest_common.sh@10 -- # set +x 00:13:27.349 14:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.349 14:59:50 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:27.349 14:59:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.349 14:59:50 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:27.349 14:59:50 -- common/autotest_common.sh@10 -- # set +x 00:13:27.608 14:59:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.608 14:59:51 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:27.608 14:59:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.608 14:59:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.608 14:59:51 -- common/autotest_common.sh@10 -- # set +x 00:13:27.866 14:59:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.866 14:59:51 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:27.866 14:59:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.866 14:59:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.866 14:59:51 -- common/autotest_common.sh@10 -- # set +x 00:13:28.124 14:59:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.124 14:59:51 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:28.124 14:59:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.124 14:59:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.124 14:59:51 -- common/autotest_common.sh@10 -- # set +x 00:13:28.692 14:59:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.692 14:59:52 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:28.692 14:59:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.692 14:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.692 14:59:52 -- common/autotest_common.sh@10 -- # set +x 00:13:28.951 14:59:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.951 14:59:52 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:28.951 14:59:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.951 14:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.951 14:59:52 -- common/autotest_common.sh@10 -- # set +x 00:13:29.210 14:59:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.210 14:59:52 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:29.210 14:59:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.210 14:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.210 14:59:52 -- common/autotest_common.sh@10 -- # set +x 00:13:29.469 14:59:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.469 14:59:52 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:29.469 14:59:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.469 14:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.469 14:59:52 -- common/autotest_common.sh@10 -- # set +x 00:13:29.728 14:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.728 14:59:53 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:29.728 14:59:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.728 14:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.728 14:59:53 -- common/autotest_common.sh@10 -- # set +x 00:13:30.296 14:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.296 14:59:53 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:30.296 14:59:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.296 14:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.296 14:59:53 -- common/autotest_common.sh@10 -- # set +x 00:13:30.554 14:59:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.554 14:59:53 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:30.554 14:59:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.554 14:59:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.554 
14:59:53 -- common/autotest_common.sh@10 -- # set +x 00:13:30.813 14:59:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.813 14:59:54 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:30.813 14:59:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.813 14:59:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.813 14:59:54 -- common/autotest_common.sh@10 -- # set +x 00:13:31.072 14:59:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.072 14:59:54 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:31.072 14:59:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.072 14:59:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.072 14:59:54 -- common/autotest_common.sh@10 -- # set +x 00:13:31.642 14:59:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.642 14:59:54 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:31.642 14:59:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.642 14:59:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.642 14:59:54 -- common/autotest_common.sh@10 -- # set +x 00:13:31.901 14:59:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.901 14:59:55 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:31.901 14:59:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.901 14:59:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.901 14:59:55 -- common/autotest_common.sh@10 -- # set +x 00:13:32.161 14:59:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.161 14:59:55 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:32.161 14:59:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.161 14:59:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.161 14:59:55 -- common/autotest_common.sh@10 -- # set +x 00:13:32.420 14:59:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.420 14:59:55 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:32.420 14:59:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.420 14:59:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.420 14:59:55 -- common/autotest_common.sh@10 -- # set +x 00:13:32.679 14:59:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.679 14:59:56 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:32.679 14:59:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.679 14:59:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.679 14:59:56 -- common/autotest_common.sh@10 -- # set +x 00:13:33.248 14:59:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.248 14:59:56 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:33.248 14:59:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.248 14:59:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.248 14:59:56 -- common/autotest_common.sh@10 -- # set +x 00:13:33.507 14:59:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.507 14:59:56 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:33.507 14:59:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.507 14:59:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.507 14:59:56 -- common/autotest_common.sh@10 -- # set +x 00:13:33.766 14:59:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.766 14:59:57 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:33.766 14:59:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.766 14:59:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.766 14:59:57 -- 
common/autotest_common.sh@10 -- # set +x 00:13:34.025 14:59:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.025 14:59:57 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:34.025 14:59:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.026 14:59:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.026 14:59:57 -- common/autotest_common.sh@10 -- # set +x 00:13:34.285 14:59:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.285 14:59:57 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:34.285 14:59:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.285 14:59:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.285 14:59:57 -- common/autotest_common.sh@10 -- # set +x 00:13:34.854 14:59:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.854 14:59:58 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:34.854 14:59:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.854 14:59:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.854 14:59:58 -- common/autotest_common.sh@10 -- # set +x 00:13:35.113 14:59:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.113 14:59:58 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:35.113 14:59:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.113 14:59:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.113 14:59:58 -- common/autotest_common.sh@10 -- # set +x 00:13:35.371 14:59:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.371 14:59:58 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:35.371 14:59:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.371 14:59:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.371 14:59:58 -- common/autotest_common.sh@10 -- # set +x 00:13:35.630 14:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.630 14:59:59 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:35.630 14:59:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.630 14:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.630 14:59:59 -- common/autotest_common.sh@10 -- # set +x 00:13:35.894 14:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.894 14:59:59 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:35.894 14:59:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.894 14:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.894 14:59:59 -- common/autotest_common.sh@10 -- # set +x 00:13:36.520 14:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.520 14:59:59 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:36.520 14:59:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.520 14:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.520 14:59:59 -- common/autotest_common.sh@10 -- # set +x 00:13:36.520 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:36.791 15:00:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.791 15:00:00 -- target/connect_stress.sh@34 -- # kill -0 81986 00:13:36.791 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (81986) - No such process 00:13:36.791 15:00:00 -- target/connect_stress.sh@38 -- # wait 81986 00:13:36.791 15:00:00 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:36.791 15:00:00 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:36.791 15:00:00 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:13:36.791 15:00:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:36.791 15:00:00 -- nvmf/common.sh@116 -- # sync 00:13:36.791 15:00:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:36.791 15:00:00 -- nvmf/common.sh@119 -- # set +e 00:13:36.791 15:00:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:36.791 15:00:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:36.791 rmmod nvme_tcp 00:13:36.791 rmmod nvme_fabrics 00:13:36.791 rmmod nvme_keyring 00:13:36.791 15:00:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:36.791 15:00:00 -- nvmf/common.sh@123 -- # set -e 00:13:36.791 15:00:00 -- nvmf/common.sh@124 -- # return 0 00:13:36.791 15:00:00 -- nvmf/common.sh@477 -- # '[' -n 81927 ']' 00:13:36.791 15:00:00 -- nvmf/common.sh@478 -- # killprocess 81927 00:13:36.791 15:00:00 -- common/autotest_common.sh@936 -- # '[' -z 81927 ']' 00:13:36.791 15:00:00 -- common/autotest_common.sh@940 -- # kill -0 81927 00:13:36.791 15:00:00 -- common/autotest_common.sh@941 -- # uname 00:13:36.791 15:00:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:36.791 15:00:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81927 00:13:36.791 15:00:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:36.791 killing process with pid 81927 00:13:36.791 15:00:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:36.791 15:00:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81927' 00:13:36.791 15:00:00 -- common/autotest_common.sh@955 -- # kill 81927 00:13:36.791 15:00:00 -- common/autotest_common.sh@960 -- # wait 81927 00:13:37.050 15:00:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:37.050 15:00:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:37.050 15:00:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:37.050 15:00:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:37.050 15:00:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:37.050 15:00:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.050 15:00:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.050 15:00:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.050 15:00:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:37.050 ************************************ 00:13:37.050 END TEST nvmf_connect_stress 00:13:37.050 ************************************ 00:13:37.050 00:13:37.050 real 0m12.503s 00:13:37.050 user 0m41.904s 00:13:37.050 sys 0m2.980s 00:13:37.050 15:00:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:37.050 15:00:00 -- common/autotest_common.sh@10 -- # set +x 00:13:37.050 15:00:00 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:37.050 15:00:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:37.050 15:00:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:37.050 15:00:00 -- common/autotest_common.sh@10 -- # set +x 00:13:37.050 ************************************ 00:13:37.050 START TEST nvmf_fused_ordering 00:13:37.050 ************************************ 00:13:37.050 15:00:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:37.050 * Looking for test storage... 
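The connect_stress run that finished above reduces to a short RPC sequence plus a polling loop: create the TCP transport, the cnode1 subsystem, a 10.0.0.2:4420 listener and a NULL1 null bdev, launch the connect_stress binary against that listener, then poll it with kill -0 while issuing RPCs until it exits. A rough sketch follows; the arguments are copied from the trace, but rpc_cmd is reduced here to a thin rpc.py wrapper and the rpc.txt bookkeeping and error handling are omitted.

rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }   # simplified stand-in for the harness helper
nqn=nqn.2016-06.io.spdk:cnode1
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512
/home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress \
    -c 0x1 -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$nqn" -t 10 &
perf_pid=$!
while kill -0 "$perf_pid" 2>/dev/null; do    # connect_stress.sh@34 in the trace
    rpc_cmd framework_get_reactors >/dev/null 2>&1 || sleep 1   # illustrative RPC; the real loop replays rpc.txt
done
wait "$perf_pid"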
00:13:37.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:37.050 15:00:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:37.050 15:00:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:37.050 15:00:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:37.310 15:00:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:37.310 15:00:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:37.310 15:00:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:37.310 15:00:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:37.310 15:00:00 -- scripts/common.sh@335 -- # IFS=.-: 00:13:37.310 15:00:00 -- scripts/common.sh@335 -- # read -ra ver1 00:13:37.310 15:00:00 -- scripts/common.sh@336 -- # IFS=.-: 00:13:37.310 15:00:00 -- scripts/common.sh@336 -- # read -ra ver2 00:13:37.310 15:00:00 -- scripts/common.sh@337 -- # local 'op=<' 00:13:37.310 15:00:00 -- scripts/common.sh@339 -- # ver1_l=2 00:13:37.310 15:00:00 -- scripts/common.sh@340 -- # ver2_l=1 00:13:37.310 15:00:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:37.310 15:00:00 -- scripts/common.sh@343 -- # case "$op" in 00:13:37.310 15:00:00 -- scripts/common.sh@344 -- # : 1 00:13:37.310 15:00:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:37.310 15:00:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:37.310 15:00:00 -- scripts/common.sh@364 -- # decimal 1 00:13:37.310 15:00:00 -- scripts/common.sh@352 -- # local d=1 00:13:37.310 15:00:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:37.310 15:00:00 -- scripts/common.sh@354 -- # echo 1 00:13:37.310 15:00:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:37.310 15:00:00 -- scripts/common.sh@365 -- # decimal 2 00:13:37.310 15:00:00 -- scripts/common.sh@352 -- # local d=2 00:13:37.310 15:00:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:37.310 15:00:00 -- scripts/common.sh@354 -- # echo 2 00:13:37.310 15:00:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:37.310 15:00:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:37.310 15:00:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:37.310 15:00:00 -- scripts/common.sh@367 -- # return 0 00:13:37.310 15:00:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:37.310 15:00:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:37.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.310 --rc genhtml_branch_coverage=1 00:13:37.310 --rc genhtml_function_coverage=1 00:13:37.310 --rc genhtml_legend=1 00:13:37.310 --rc geninfo_all_blocks=1 00:13:37.310 --rc geninfo_unexecuted_blocks=1 00:13:37.310 00:13:37.310 ' 00:13:37.310 15:00:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:37.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.310 --rc genhtml_branch_coverage=1 00:13:37.310 --rc genhtml_function_coverage=1 00:13:37.310 --rc genhtml_legend=1 00:13:37.310 --rc geninfo_all_blocks=1 00:13:37.310 --rc geninfo_unexecuted_blocks=1 00:13:37.310 00:13:37.310 ' 00:13:37.310 15:00:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:37.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.310 --rc genhtml_branch_coverage=1 00:13:37.310 --rc genhtml_function_coverage=1 00:13:37.310 --rc genhtml_legend=1 00:13:37.310 --rc geninfo_all_blocks=1 00:13:37.310 --rc geninfo_unexecuted_blocks=1 00:13:37.310 00:13:37.310 ' 00:13:37.310 
15:00:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:37.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.310 --rc genhtml_branch_coverage=1 00:13:37.310 --rc genhtml_function_coverage=1 00:13:37.310 --rc genhtml_legend=1 00:13:37.310 --rc geninfo_all_blocks=1 00:13:37.310 --rc geninfo_unexecuted_blocks=1 00:13:37.310 00:13:37.310 ' 00:13:37.310 15:00:00 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:37.310 15:00:00 -- nvmf/common.sh@7 -- # uname -s 00:13:37.310 15:00:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.310 15:00:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.310 15:00:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.310 15:00:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.310 15:00:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.310 15:00:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.310 15:00:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.310 15:00:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.310 15:00:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.310 15:00:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.310 15:00:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:13:37.310 15:00:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:13:37.310 15:00:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.310 15:00:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.310 15:00:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:37.310 15:00:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:37.310 15:00:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.310 15:00:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.310 15:00:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.310 15:00:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.310 15:00:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.310 15:00:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.310 15:00:00 -- paths/export.sh@5 -- # export PATH 00:13:37.310 15:00:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.310 15:00:00 -- nvmf/common.sh@46 -- # : 0 00:13:37.310 15:00:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:37.310 15:00:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:37.310 15:00:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:37.310 15:00:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.310 15:00:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.310 15:00:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:37.310 15:00:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:37.310 15:00:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:37.310 15:00:00 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:37.310 15:00:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:37.310 15:00:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:37.311 15:00:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:37.311 15:00:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:37.311 15:00:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:37.311 15:00:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.311 15:00:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.311 15:00:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.311 15:00:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:37.311 15:00:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:37.311 15:00:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:37.311 15:00:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:37.311 15:00:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:37.311 15:00:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:37.311 15:00:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.311 15:00:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.311 15:00:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:37.311 15:00:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:37.311 15:00:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:37.311 15:00:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:37.311 15:00:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:37.311 15:00:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:37.311 15:00:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:37.311 15:00:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:37.311 15:00:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:37.311 15:00:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:37.311 15:00:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:37.311 15:00:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:37.311 Cannot find device "nvmf_tgt_br" 00:13:37.311 15:00:00 -- nvmf/common.sh@154 -- # true 00:13:37.311 15:00:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:37.311 Cannot find device "nvmf_tgt_br2" 00:13:37.311 15:00:00 -- nvmf/common.sh@155 -- # true 00:13:37.311 15:00:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:37.311 15:00:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:37.311 Cannot find device "nvmf_tgt_br" 00:13:37.311 15:00:00 -- nvmf/common.sh@157 -- # true 00:13:37.311 15:00:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:37.311 Cannot find device "nvmf_tgt_br2" 00:13:37.311 15:00:00 -- nvmf/common.sh@158 -- # true 00:13:37.311 15:00:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:37.311 15:00:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:37.311 15:00:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:37.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:37.570 15:00:00 -- nvmf/common.sh@161 -- # true 00:13:37.570 15:00:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:37.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:37.570 15:00:00 -- nvmf/common.sh@162 -- # true 00:13:37.570 15:00:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:37.570 15:00:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:37.570 15:00:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:37.570 15:00:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:37.570 15:00:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:37.570 15:00:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:37.570 15:00:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:37.570 15:00:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:37.570 15:00:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:37.570 15:00:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:37.570 15:00:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:37.570 15:00:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:37.570 15:00:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:37.570 15:00:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:37.570 15:00:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:37.570 15:00:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:37.570 15:00:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:37.570 15:00:01 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:37.570 15:00:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:37.570 15:00:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:37.570 15:00:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:37.570 15:00:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:37.570 15:00:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:37.570 15:00:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:37.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:37.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:13:37.570 00:13:37.571 --- 10.0.0.2 ping statistics --- 00:13:37.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.571 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:13:37.571 15:00:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:37.571 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:37.571 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:13:37.571 00:13:37.571 --- 10.0.0.3 ping statistics --- 00:13:37.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.571 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:13:37.571 15:00:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:37.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:37.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:37.571 00:13:37.571 --- 10.0.0.1 ping statistics --- 00:13:37.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.571 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:37.571 15:00:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.571 15:00:01 -- nvmf/common.sh@421 -- # return 0 00:13:37.571 15:00:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:37.571 15:00:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.571 15:00:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:37.571 15:00:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:37.571 15:00:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.571 15:00:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:37.571 15:00:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:37.571 15:00:01 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:37.571 15:00:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:37.571 15:00:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:37.571 15:00:01 -- common/autotest_common.sh@10 -- # set +x 00:13:37.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.571 15:00:01 -- nvmf/common.sh@469 -- # nvmfpid=82318 00:13:37.571 15:00:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:37.571 15:00:01 -- nvmf/common.sh@470 -- # waitforlisten 82318 00:13:37.571 15:00:01 -- common/autotest_common.sh@829 -- # '[' -z 82318 ']' 00:13:37.571 15:00:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.571 15:00:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:37.571 15:00:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
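The nvmf_veth_init trace above builds the virtual topology the TCP tests run over: a network namespace for the target, three veth pairs, a bridge joining their host-side ends, addresses on 10.0.0.0/24, two iptables rules, and ping sanity checks. A condensed sketch of those steps, with names and addresses taken from the log (the helper in nvmf/common.sh additionally tears down leftovers from earlier runs first, which is what the "Cannot find device" / "Cannot open network namespace" messages above come from):

```bash
#!/usr/bin/env bash
# Condensed sketch of the veth/namespace setup logged above; error handling
# and the teardown-of-leftovers steps are omitted here.
set -e

ip netns add nvmf_tgt_ns_spdk

# One veth pair per leg; the *_br ends stay in the default namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-side interfaces move into the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator at .1, target listeners at .2 and .3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# A bridge in the default namespace stitches the three legs together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic on port 4420 in, and bridge-local forwarding.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks, mirroring the pings in the log.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
```

Keeping the target-side interfaces inside nvmf_tgt_ns_spdk is what lets one VM act as both initiator (10.0.0.1) and target (10.0.0.2/10.0.0.3) without a second machine.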
00:13:37.571 15:00:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:37.571 15:00:01 -- common/autotest_common.sh@10 -- # set +x 00:13:37.830 [2024-11-18 15:00:01.161547] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:37.830 [2024-11-18 15:00:01.161617] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.830 [2024-11-18 15:00:01.297454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.830 [2024-11-18 15:00:01.355579] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:37.830 [2024-11-18 15:00:01.356077] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.830 [2024-11-18 15:00:01.356102] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.830 [2024-11-18 15:00:01.356113] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:37.830 [2024-11-18 15:00:01.356146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.765 15:00:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:38.765 15:00:02 -- common/autotest_common.sh@862 -- # return 0 00:13:38.766 15:00:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:38.766 15:00:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:38.766 15:00:02 -- common/autotest_common.sh@10 -- # set +x 00:13:38.766 15:00:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.766 15:00:02 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:38.766 15:00:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.766 15:00:02 -- common/autotest_common.sh@10 -- # set +x 00:13:38.766 [2024-11-18 15:00:02.285174] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.766 15:00:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.766 15:00:02 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:38.766 15:00:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.766 15:00:02 -- common/autotest_common.sh@10 -- # set +x 00:13:38.766 15:00:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.766 15:00:02 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.766 15:00:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.766 15:00:02 -- common/autotest_common.sh@10 -- # set +x 00:13:38.766 [2024-11-18 15:00:02.301294] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.766 15:00:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.766 15:00:02 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:38.766 15:00:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.766 15:00:02 -- common/autotest_common.sh@10 -- # set +x 00:13:38.766 NULL1 00:13:38.766 15:00:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.766 15:00:02 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:38.766 15:00:02 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:38.766 15:00:02 -- common/autotest_common.sh@10 -- # set +x 00:13:38.766 15:00:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.766 15:00:02 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:38.766 15:00:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.766 15:00:02 -- common/autotest_common.sh@10 -- # set +x 00:13:38.766 15:00:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.766 15:00:02 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:39.024 [2024-11-18 15:00:02.351127] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:39.024 [2024-11-18 15:00:02.351160] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82368 ] 00:13:39.283 Attached to nqn.2016-06.io.spdk:cnode1 00:13:39.283 Namespace ID: 1 size: 1GB 00:13:39.283 fused_ordering(0) 00:13:39.283 fused_ordering(1) 00:13:39.283 fused_ordering(2) 00:13:39.283 fused_ordering(3) 00:13:39.283 fused_ordering(4) 00:13:39.283 fused_ordering(5) 00:13:39.283 fused_ordering(6) 00:13:39.283 fused_ordering(7) 00:13:39.283 fused_ordering(8) 00:13:39.283 fused_ordering(9) 00:13:39.283 fused_ordering(10) 00:13:39.283 fused_ordering(11) 00:13:39.283 fused_ordering(12) 00:13:39.283 fused_ordering(13) 00:13:39.283 fused_ordering(14) 00:13:39.283 fused_ordering(15) 00:13:39.283 fused_ordering(16) 00:13:39.283 fused_ordering(17) 00:13:39.283 fused_ordering(18) 00:13:39.283 fused_ordering(19) 00:13:39.283 fused_ordering(20) 00:13:39.283 fused_ordering(21) 00:13:39.283 fused_ordering(22) 00:13:39.283 fused_ordering(23) 00:13:39.283 fused_ordering(24) 00:13:39.283 fused_ordering(25) 00:13:39.283 fused_ordering(26) 00:13:39.283 fused_ordering(27) 00:13:39.283 fused_ordering(28) 00:13:39.283 fused_ordering(29) 00:13:39.283 fused_ordering(30) 00:13:39.283 fused_ordering(31) 00:13:39.283 fused_ordering(32) 00:13:39.283 fused_ordering(33) 00:13:39.283 fused_ordering(34) 00:13:39.283 fused_ordering(35) 00:13:39.283 fused_ordering(36) 00:13:39.283 fused_ordering(37) 00:13:39.283 fused_ordering(38) 00:13:39.283 fused_ordering(39) 00:13:39.283 fused_ordering(40) 00:13:39.283 fused_ordering(41) 00:13:39.283 fused_ordering(42) 00:13:39.283 fused_ordering(43) 00:13:39.283 fused_ordering(44) 00:13:39.283 fused_ordering(45) 00:13:39.283 fused_ordering(46) 00:13:39.283 fused_ordering(47) 00:13:39.283 fused_ordering(48) 00:13:39.283 fused_ordering(49) 00:13:39.283 fused_ordering(50) 00:13:39.283 fused_ordering(51) 00:13:39.283 fused_ordering(52) 00:13:39.283 fused_ordering(53) 00:13:39.283 fused_ordering(54) 00:13:39.283 fused_ordering(55) 00:13:39.283 fused_ordering(56) 00:13:39.283 fused_ordering(57) 00:13:39.283 fused_ordering(58) 00:13:39.283 fused_ordering(59) 00:13:39.283 fused_ordering(60) 00:13:39.283 fused_ordering(61) 00:13:39.283 fused_ordering(62) 00:13:39.283 fused_ordering(63) 00:13:39.283 fused_ordering(64) 00:13:39.283 fused_ordering(65) 00:13:39.283 fused_ordering(66) 00:13:39.283 fused_ordering(67) 00:13:39.283 fused_ordering(68) 00:13:39.283 fused_ordering(69) 00:13:39.283 fused_ordering(70) 00:13:39.283 fused_ordering(71) 00:13:39.283 fused_ordering(72) 00:13:39.283 
fused_ordering(73) 00:13:39.283 fused_ordering(74) 00:13:39.283 fused_ordering(75) 00:13:39.283 fused_ordering(76) 00:13:39.283 fused_ordering(77) 00:13:39.283 fused_ordering(78) 00:13:39.283 fused_ordering(79) 00:13:39.283 fused_ordering(80) 00:13:39.283 fused_ordering(81) 00:13:39.283 fused_ordering(82) 00:13:39.283 fused_ordering(83) 00:13:39.283 fused_ordering(84) 00:13:39.283 fused_ordering(85) 00:13:39.283 fused_ordering(86) 00:13:39.283 fused_ordering(87) 00:13:39.283 fused_ordering(88) 00:13:39.283 fused_ordering(89) 00:13:39.283 fused_ordering(90) 00:13:39.283 fused_ordering(91) 00:13:39.283 fused_ordering(92) 00:13:39.283 fused_ordering(93) 00:13:39.283 fused_ordering(94) 00:13:39.283 fused_ordering(95) 00:13:39.283 fused_ordering(96) 00:13:39.283 fused_ordering(97) 00:13:39.283 fused_ordering(98) 00:13:39.283 fused_ordering(99) 00:13:39.283 fused_ordering(100) 00:13:39.283 fused_ordering(101) 00:13:39.283 fused_ordering(102) 00:13:39.283 fused_ordering(103) 00:13:39.283 fused_ordering(104) 00:13:39.283 fused_ordering(105) 00:13:39.283 fused_ordering(106) 00:13:39.283 fused_ordering(107) 00:13:39.283 fused_ordering(108) 00:13:39.283 fused_ordering(109) 00:13:39.283 fused_ordering(110) 00:13:39.283 fused_ordering(111) 00:13:39.283 fused_ordering(112) 00:13:39.283 fused_ordering(113) 00:13:39.283 fused_ordering(114) 00:13:39.283 fused_ordering(115) 00:13:39.284 fused_ordering(116) 00:13:39.284 fused_ordering(117) 00:13:39.284 fused_ordering(118) 00:13:39.284 fused_ordering(119) 00:13:39.284 fused_ordering(120) 00:13:39.284 fused_ordering(121) 00:13:39.284 fused_ordering(122) 00:13:39.284 fused_ordering(123) 00:13:39.284 fused_ordering(124) 00:13:39.284 fused_ordering(125) 00:13:39.284 fused_ordering(126) 00:13:39.284 fused_ordering(127) 00:13:39.284 fused_ordering(128) 00:13:39.284 fused_ordering(129) 00:13:39.284 fused_ordering(130) 00:13:39.284 fused_ordering(131) 00:13:39.284 fused_ordering(132) 00:13:39.284 fused_ordering(133) 00:13:39.284 fused_ordering(134) 00:13:39.284 fused_ordering(135) 00:13:39.284 fused_ordering(136) 00:13:39.284 fused_ordering(137) 00:13:39.284 fused_ordering(138) 00:13:39.284 fused_ordering(139) 00:13:39.284 fused_ordering(140) 00:13:39.284 fused_ordering(141) 00:13:39.284 fused_ordering(142) 00:13:39.284 fused_ordering(143) 00:13:39.284 fused_ordering(144) 00:13:39.284 fused_ordering(145) 00:13:39.284 fused_ordering(146) 00:13:39.284 fused_ordering(147) 00:13:39.284 fused_ordering(148) 00:13:39.284 fused_ordering(149) 00:13:39.284 fused_ordering(150) 00:13:39.284 fused_ordering(151) 00:13:39.284 fused_ordering(152) 00:13:39.284 fused_ordering(153) 00:13:39.284 fused_ordering(154) 00:13:39.284 fused_ordering(155) 00:13:39.284 fused_ordering(156) 00:13:39.284 fused_ordering(157) 00:13:39.284 fused_ordering(158) 00:13:39.284 fused_ordering(159) 00:13:39.284 fused_ordering(160) 00:13:39.284 fused_ordering(161) 00:13:39.284 fused_ordering(162) 00:13:39.284 fused_ordering(163) 00:13:39.284 fused_ordering(164) 00:13:39.284 fused_ordering(165) 00:13:39.284 fused_ordering(166) 00:13:39.284 fused_ordering(167) 00:13:39.284 fused_ordering(168) 00:13:39.284 fused_ordering(169) 00:13:39.284 fused_ordering(170) 00:13:39.284 fused_ordering(171) 00:13:39.284 fused_ordering(172) 00:13:39.284 fused_ordering(173) 00:13:39.284 fused_ordering(174) 00:13:39.284 fused_ordering(175) 00:13:39.284 fused_ordering(176) 00:13:39.284 fused_ordering(177) 00:13:39.284 fused_ordering(178) 00:13:39.284 fused_ordering(179) 00:13:39.284 fused_ordering(180) 00:13:39.284 
fused_ordering(181) 00:13:39.284 fused_ordering(182) 00:13:39.284 fused_ordering(183) 00:13:39.284 fused_ordering(184) 00:13:39.284 fused_ordering(185) 00:13:39.284 fused_ordering(186) 00:13:39.284 fused_ordering(187) 00:13:39.284 fused_ordering(188) 00:13:39.284 fused_ordering(189) 00:13:39.284 fused_ordering(190) 00:13:39.284 fused_ordering(191) 00:13:39.284 fused_ordering(192) 00:13:39.284 fused_ordering(193) 00:13:39.284 fused_ordering(194) 00:13:39.284 fused_ordering(195) 00:13:39.284 fused_ordering(196) 00:13:39.284 fused_ordering(197) 00:13:39.284 fused_ordering(198) 00:13:39.284 fused_ordering(199) 00:13:39.284 fused_ordering(200) 00:13:39.284 fused_ordering(201) 00:13:39.284 fused_ordering(202) 00:13:39.284 fused_ordering(203) 00:13:39.284 fused_ordering(204) 00:13:39.284 fused_ordering(205) 00:13:39.543 fused_ordering(206) 00:13:39.543 fused_ordering(207) 00:13:39.543 fused_ordering(208) 00:13:39.543 fused_ordering(209) 00:13:39.543 fused_ordering(210) 00:13:39.543 fused_ordering(211) 00:13:39.543 fused_ordering(212) 00:13:39.543 fused_ordering(213) 00:13:39.543 fused_ordering(214) 00:13:39.543 fused_ordering(215) 00:13:39.543 fused_ordering(216) 00:13:39.543 fused_ordering(217) 00:13:39.543 fused_ordering(218) 00:13:39.543 fused_ordering(219) 00:13:39.543 fused_ordering(220) 00:13:39.543 fused_ordering(221) 00:13:39.543 fused_ordering(222) 00:13:39.543 fused_ordering(223) 00:13:39.543 fused_ordering(224) 00:13:39.543 fused_ordering(225) 00:13:39.543 fused_ordering(226) 00:13:39.543 fused_ordering(227) 00:13:39.543 fused_ordering(228) 00:13:39.543 fused_ordering(229) 00:13:39.543 fused_ordering(230) 00:13:39.543 fused_ordering(231) 00:13:39.543 fused_ordering(232) 00:13:39.543 fused_ordering(233) 00:13:39.543 fused_ordering(234) 00:13:39.543 fused_ordering(235) 00:13:39.543 fused_ordering(236) 00:13:39.543 fused_ordering(237) 00:13:39.543 fused_ordering(238) 00:13:39.543 fused_ordering(239) 00:13:39.543 fused_ordering(240) 00:13:39.543 fused_ordering(241) 00:13:39.543 fused_ordering(242) 00:13:39.543 fused_ordering(243) 00:13:39.543 fused_ordering(244) 00:13:39.543 fused_ordering(245) 00:13:39.543 fused_ordering(246) 00:13:39.543 fused_ordering(247) 00:13:39.543 fused_ordering(248) 00:13:39.543 fused_ordering(249) 00:13:39.543 fused_ordering(250) 00:13:39.543 fused_ordering(251) 00:13:39.543 fused_ordering(252) 00:13:39.543 fused_ordering(253) 00:13:39.543 fused_ordering(254) 00:13:39.543 fused_ordering(255) 00:13:39.543 fused_ordering(256) 00:13:39.543 fused_ordering(257) 00:13:39.543 fused_ordering(258) 00:13:39.543 fused_ordering(259) 00:13:39.543 fused_ordering(260) 00:13:39.543 fused_ordering(261) 00:13:39.543 fused_ordering(262) 00:13:39.543 fused_ordering(263) 00:13:39.543 fused_ordering(264) 00:13:39.543 fused_ordering(265) 00:13:39.543 fused_ordering(266) 00:13:39.543 fused_ordering(267) 00:13:39.543 fused_ordering(268) 00:13:39.543 fused_ordering(269) 00:13:39.543 fused_ordering(270) 00:13:39.543 fused_ordering(271) 00:13:39.543 fused_ordering(272) 00:13:39.543 fused_ordering(273) 00:13:39.543 fused_ordering(274) 00:13:39.543 fused_ordering(275) 00:13:39.543 fused_ordering(276) 00:13:39.543 fused_ordering(277) 00:13:39.543 fused_ordering(278) 00:13:39.543 fused_ordering(279) 00:13:39.543 fused_ordering(280) 00:13:39.543 fused_ordering(281) 00:13:39.543 fused_ordering(282) 00:13:39.543 fused_ordering(283) 00:13:39.543 fused_ordering(284) 00:13:39.543 fused_ordering(285) 00:13:39.543 fused_ordering(286) 00:13:39.543 fused_ordering(287) 00:13:39.543 fused_ordering(288) 
00:13:39.543 fused_ordering(289) 00:13:39.543 fused_ordering(290) 00:13:39.543 fused_ordering(291) 00:13:39.543 fused_ordering(292) 00:13:39.543 fused_ordering(293) 00:13:39.543 fused_ordering(294) 00:13:39.543 fused_ordering(295) 00:13:39.543 fused_ordering(296) 00:13:39.543 fused_ordering(297) 00:13:39.543 fused_ordering(298) 00:13:39.543 fused_ordering(299) 00:13:39.543 fused_ordering(300) 00:13:39.543 fused_ordering(301) 00:13:39.543 fused_ordering(302) 00:13:39.543 fused_ordering(303) 00:13:39.543 fused_ordering(304) 00:13:39.543 fused_ordering(305) 00:13:39.543 fused_ordering(306) 00:13:39.543 fused_ordering(307) 00:13:39.543 fused_ordering(308) 00:13:39.543 fused_ordering(309) 00:13:39.543 fused_ordering(310) 00:13:39.543 fused_ordering(311) 00:13:39.543 fused_ordering(312) 00:13:39.543 fused_ordering(313) 00:13:39.543 fused_ordering(314) 00:13:39.543 fused_ordering(315) 00:13:39.543 fused_ordering(316) 00:13:39.543 fused_ordering(317) 00:13:39.543 fused_ordering(318) 00:13:39.543 fused_ordering(319) 00:13:39.543 fused_ordering(320) 00:13:39.543 fused_ordering(321) 00:13:39.543 fused_ordering(322) 00:13:39.543 fused_ordering(323) 00:13:39.543 fused_ordering(324) 00:13:39.543 fused_ordering(325) 00:13:39.543 fused_ordering(326) 00:13:39.543 fused_ordering(327) 00:13:39.543 fused_ordering(328) 00:13:39.543 fused_ordering(329) 00:13:39.543 fused_ordering(330) 00:13:39.543 fused_ordering(331) 00:13:39.543 fused_ordering(332) 00:13:39.543 fused_ordering(333) 00:13:39.543 fused_ordering(334) 00:13:39.543 fused_ordering(335) 00:13:39.543 fused_ordering(336) 00:13:39.543 fused_ordering(337) 00:13:39.543 fused_ordering(338) 00:13:39.543 fused_ordering(339) 00:13:39.543 fused_ordering(340) 00:13:39.543 fused_ordering(341) 00:13:39.543 fused_ordering(342) 00:13:39.543 fused_ordering(343) 00:13:39.543 fused_ordering(344) 00:13:39.543 fused_ordering(345) 00:13:39.543 fused_ordering(346) 00:13:39.543 fused_ordering(347) 00:13:39.543 fused_ordering(348) 00:13:39.543 fused_ordering(349) 00:13:39.543 fused_ordering(350) 00:13:39.543 fused_ordering(351) 00:13:39.543 fused_ordering(352) 00:13:39.543 fused_ordering(353) 00:13:39.543 fused_ordering(354) 00:13:39.543 fused_ordering(355) 00:13:39.543 fused_ordering(356) 00:13:39.543 fused_ordering(357) 00:13:39.543 fused_ordering(358) 00:13:39.543 fused_ordering(359) 00:13:39.543 fused_ordering(360) 00:13:39.543 fused_ordering(361) 00:13:39.543 fused_ordering(362) 00:13:39.543 fused_ordering(363) 00:13:39.543 fused_ordering(364) 00:13:39.543 fused_ordering(365) 00:13:39.543 fused_ordering(366) 00:13:39.543 fused_ordering(367) 00:13:39.543 fused_ordering(368) 00:13:39.543 fused_ordering(369) 00:13:39.543 fused_ordering(370) 00:13:39.543 fused_ordering(371) 00:13:39.543 fused_ordering(372) 00:13:39.543 fused_ordering(373) 00:13:39.543 fused_ordering(374) 00:13:39.543 fused_ordering(375) 00:13:39.543 fused_ordering(376) 00:13:39.544 fused_ordering(377) 00:13:39.544 fused_ordering(378) 00:13:39.544 fused_ordering(379) 00:13:39.544 fused_ordering(380) 00:13:39.544 fused_ordering(381) 00:13:39.544 fused_ordering(382) 00:13:39.544 fused_ordering(383) 00:13:39.544 fused_ordering(384) 00:13:39.544 fused_ordering(385) 00:13:39.544 fused_ordering(386) 00:13:39.544 fused_ordering(387) 00:13:39.544 fused_ordering(388) 00:13:39.544 fused_ordering(389) 00:13:39.544 fused_ordering(390) 00:13:39.544 fused_ordering(391) 00:13:39.544 fused_ordering(392) 00:13:39.544 fused_ordering(393) 00:13:39.544 fused_ordering(394) 00:13:39.544 fused_ordering(395) 00:13:39.544 
fused_ordering(396) 00:13:39.544 fused_ordering(397) 00:13:39.544 fused_ordering(398) 00:13:39.544 fused_ordering(399) 00:13:39.544 fused_ordering(400) 00:13:39.544 fused_ordering(401) 00:13:39.544 fused_ordering(402) 00:13:39.544 fused_ordering(403) 00:13:39.544 fused_ordering(404) 00:13:39.544 fused_ordering(405) 00:13:39.544 fused_ordering(406) 00:13:39.544 fused_ordering(407) 00:13:39.544 fused_ordering(408) 00:13:39.544 fused_ordering(409) 00:13:39.544 fused_ordering(410) 00:13:39.801 fused_ordering(411) 00:13:39.801 fused_ordering(412) 00:13:39.801 fused_ordering(413) 00:13:39.801 fused_ordering(414) 00:13:39.801 fused_ordering(415) 00:13:39.801 fused_ordering(416) 00:13:39.801 fused_ordering(417) 00:13:39.801 fused_ordering(418) 00:13:39.801 fused_ordering(419) 00:13:39.801 fused_ordering(420) 00:13:39.801 fused_ordering(421) 00:13:39.801 fused_ordering(422) 00:13:39.801 fused_ordering(423) 00:13:39.801 fused_ordering(424) 00:13:39.801 fused_ordering(425) 00:13:39.801 fused_ordering(426) 00:13:39.801 fused_ordering(427) 00:13:39.801 fused_ordering(428) 00:13:39.802 fused_ordering(429) 00:13:39.802 fused_ordering(430) 00:13:39.802 fused_ordering(431) 00:13:39.802 fused_ordering(432) 00:13:39.802 fused_ordering(433) 00:13:39.802 fused_ordering(434) 00:13:39.802 fused_ordering(435) 00:13:39.802 fused_ordering(436) 00:13:39.802 fused_ordering(437) 00:13:39.802 fused_ordering(438) 00:13:39.802 fused_ordering(439) 00:13:39.802 fused_ordering(440) 00:13:39.802 fused_ordering(441) 00:13:39.802 fused_ordering(442) 00:13:39.802 fused_ordering(443) 00:13:39.802 fused_ordering(444) 00:13:39.802 fused_ordering(445) 00:13:39.802 fused_ordering(446) 00:13:39.802 fused_ordering(447) 00:13:39.802 fused_ordering(448) 00:13:39.802 fused_ordering(449) 00:13:39.802 fused_ordering(450) 00:13:39.802 fused_ordering(451) 00:13:39.802 fused_ordering(452) 00:13:39.802 fused_ordering(453) 00:13:39.802 fused_ordering(454) 00:13:39.802 fused_ordering(455) 00:13:39.802 fused_ordering(456) 00:13:39.802 fused_ordering(457) 00:13:39.802 fused_ordering(458) 00:13:39.802 fused_ordering(459) 00:13:39.802 fused_ordering(460) 00:13:39.802 fused_ordering(461) 00:13:39.802 fused_ordering(462) 00:13:39.802 fused_ordering(463) 00:13:39.802 fused_ordering(464) 00:13:39.802 fused_ordering(465) 00:13:39.802 fused_ordering(466) 00:13:39.802 fused_ordering(467) 00:13:39.802 fused_ordering(468) 00:13:39.802 fused_ordering(469) 00:13:39.802 fused_ordering(470) 00:13:39.802 fused_ordering(471) 00:13:39.802 fused_ordering(472) 00:13:39.802 fused_ordering(473) 00:13:39.802 fused_ordering(474) 00:13:39.802 fused_ordering(475) 00:13:39.802 fused_ordering(476) 00:13:39.802 fused_ordering(477) 00:13:39.802 fused_ordering(478) 00:13:39.802 fused_ordering(479) 00:13:39.802 fused_ordering(480) 00:13:39.802 fused_ordering(481) 00:13:39.802 fused_ordering(482) 00:13:39.802 fused_ordering(483) 00:13:39.802 fused_ordering(484) 00:13:39.802 fused_ordering(485) 00:13:39.802 fused_ordering(486) 00:13:39.802 fused_ordering(487) 00:13:39.802 fused_ordering(488) 00:13:39.802 fused_ordering(489) 00:13:39.802 fused_ordering(490) 00:13:39.802 fused_ordering(491) 00:13:39.802 fused_ordering(492) 00:13:39.802 fused_ordering(493) 00:13:39.802 fused_ordering(494) 00:13:39.802 fused_ordering(495) 00:13:39.802 fused_ordering(496) 00:13:39.802 fused_ordering(497) 00:13:39.802 fused_ordering(498) 00:13:39.802 fused_ordering(499) 00:13:39.802 fused_ordering(500) 00:13:39.802 fused_ordering(501) 00:13:39.802 fused_ordering(502) 00:13:39.802 fused_ordering(503) 
00:13:39.802 fused_ordering(504) 00:13:39.802 fused_ordering(505) 00:13:39.802 fused_ordering(506) 00:13:39.802 fused_ordering(507) 00:13:39.802 fused_ordering(508) 00:13:39.802 fused_ordering(509) 00:13:39.802 fused_ordering(510) 00:13:39.802 fused_ordering(511) 00:13:39.802 fused_ordering(512) 00:13:39.802 fused_ordering(513) 00:13:39.802 fused_ordering(514) 00:13:39.802 fused_ordering(515) 00:13:39.802 fused_ordering(516) 00:13:39.802 fused_ordering(517) 00:13:39.802 fused_ordering(518) 00:13:39.802 fused_ordering(519) 00:13:39.802 fused_ordering(520) 00:13:39.802 fused_ordering(521) 00:13:39.802 fused_ordering(522) 00:13:39.802 fused_ordering(523) 00:13:39.802 fused_ordering(524) 00:13:39.802 fused_ordering(525) 00:13:39.802 fused_ordering(526) 00:13:39.802 fused_ordering(527) 00:13:39.802 fused_ordering(528) 00:13:39.802 fused_ordering(529) 00:13:39.802 fused_ordering(530) 00:13:39.802 fused_ordering(531) 00:13:39.802 fused_ordering(532) 00:13:39.802 fused_ordering(533) 00:13:39.802 fused_ordering(534) 00:13:39.802 fused_ordering(535) 00:13:39.802 fused_ordering(536) 00:13:39.802 fused_ordering(537) 00:13:39.802 fused_ordering(538) 00:13:39.802 fused_ordering(539) 00:13:39.802 fused_ordering(540) 00:13:39.802 fused_ordering(541) 00:13:39.802 fused_ordering(542) 00:13:39.802 fused_ordering(543) 00:13:39.802 fused_ordering(544) 00:13:39.802 fused_ordering(545) 00:13:39.802 fused_ordering(546) 00:13:39.802 fused_ordering(547) 00:13:39.802 fused_ordering(548) 00:13:39.802 fused_ordering(549) 00:13:39.802 fused_ordering(550) 00:13:39.802 fused_ordering(551) 00:13:39.802 fused_ordering(552) 00:13:39.802 fused_ordering(553) 00:13:39.802 fused_ordering(554) 00:13:39.802 fused_ordering(555) 00:13:39.802 fused_ordering(556) 00:13:39.802 fused_ordering(557) 00:13:39.802 fused_ordering(558) 00:13:39.802 fused_ordering(559) 00:13:39.802 fused_ordering(560) 00:13:39.802 fused_ordering(561) 00:13:39.802 fused_ordering(562) 00:13:39.802 fused_ordering(563) 00:13:39.802 fused_ordering(564) 00:13:39.802 fused_ordering(565) 00:13:39.802 fused_ordering(566) 00:13:39.802 fused_ordering(567) 00:13:39.802 fused_ordering(568) 00:13:39.802 fused_ordering(569) 00:13:39.802 fused_ordering(570) 00:13:39.802 fused_ordering(571) 00:13:39.802 fused_ordering(572) 00:13:39.802 fused_ordering(573) 00:13:39.802 fused_ordering(574) 00:13:39.802 fused_ordering(575) 00:13:39.802 fused_ordering(576) 00:13:39.802 fused_ordering(577) 00:13:39.802 fused_ordering(578) 00:13:39.802 fused_ordering(579) 00:13:39.802 fused_ordering(580) 00:13:39.802 fused_ordering(581) 00:13:39.802 fused_ordering(582) 00:13:39.802 fused_ordering(583) 00:13:39.802 fused_ordering(584) 00:13:39.802 fused_ordering(585) 00:13:39.802 fused_ordering(586) 00:13:39.802 fused_ordering(587) 00:13:39.802 fused_ordering(588) 00:13:39.802 fused_ordering(589) 00:13:39.802 fused_ordering(590) 00:13:39.802 fused_ordering(591) 00:13:39.802 fused_ordering(592) 00:13:39.802 fused_ordering(593) 00:13:39.802 fused_ordering(594) 00:13:39.802 fused_ordering(595) 00:13:39.802 fused_ordering(596) 00:13:39.802 fused_ordering(597) 00:13:39.802 fused_ordering(598) 00:13:39.802 fused_ordering(599) 00:13:39.802 fused_ordering(600) 00:13:39.802 fused_ordering(601) 00:13:39.802 fused_ordering(602) 00:13:39.802 fused_ordering(603) 00:13:39.802 fused_ordering(604) 00:13:39.803 fused_ordering(605) 00:13:39.803 fused_ordering(606) 00:13:39.803 fused_ordering(607) 00:13:39.803 fused_ordering(608) 00:13:39.803 fused_ordering(609) 00:13:39.803 fused_ordering(610) 00:13:39.803 
fused_ordering(611) 00:13:39.803 fused_ordering(612) 00:13:39.803 fused_ordering(613) 00:13:39.803 fused_ordering(614) 00:13:39.803 fused_ordering(615) 00:13:40.370 fused_ordering(616) 00:13:40.370 fused_ordering(617) 00:13:40.370 fused_ordering(618) 00:13:40.370 fused_ordering(619) 00:13:40.370 fused_ordering(620) 00:13:40.370 fused_ordering(621) 00:13:40.370 fused_ordering(622) 00:13:40.370 fused_ordering(623) 00:13:40.370 fused_ordering(624) 00:13:40.370 fused_ordering(625) 00:13:40.370 fused_ordering(626) 00:13:40.370 fused_ordering(627) 00:13:40.370 fused_ordering(628) 00:13:40.370 fused_ordering(629) 00:13:40.370 fused_ordering(630) 00:13:40.370 fused_ordering(631) 00:13:40.370 fused_ordering(632) 00:13:40.370 fused_ordering(633) 00:13:40.370 fused_ordering(634) 00:13:40.370 fused_ordering(635) 00:13:40.370 fused_ordering(636) 00:13:40.370 fused_ordering(637) 00:13:40.370 fused_ordering(638) 00:13:40.370 fused_ordering(639) 00:13:40.370 fused_ordering(640) 00:13:40.370 fused_ordering(641) 00:13:40.370 fused_ordering(642) 00:13:40.370 fused_ordering(643) 00:13:40.370 fused_ordering(644) 00:13:40.370 fused_ordering(645) 00:13:40.370 fused_ordering(646) 00:13:40.370 fused_ordering(647) 00:13:40.370 fused_ordering(648) 00:13:40.370 fused_ordering(649) 00:13:40.370 fused_ordering(650) 00:13:40.370 fused_ordering(651) 00:13:40.370 fused_ordering(652) 00:13:40.370 fused_ordering(653) 00:13:40.370 fused_ordering(654) 00:13:40.370 fused_ordering(655) 00:13:40.370 fused_ordering(656) 00:13:40.370 fused_ordering(657) 00:13:40.370 fused_ordering(658) 00:13:40.370 fused_ordering(659) 00:13:40.370 fused_ordering(660) 00:13:40.370 fused_ordering(661) 00:13:40.370 fused_ordering(662) 00:13:40.370 fused_ordering(663) 00:13:40.370 fused_ordering(664) 00:13:40.370 fused_ordering(665) 00:13:40.370 fused_ordering(666) 00:13:40.370 fused_ordering(667) 00:13:40.370 fused_ordering(668) 00:13:40.370 fused_ordering(669) 00:13:40.370 fused_ordering(670) 00:13:40.370 fused_ordering(671) 00:13:40.370 fused_ordering(672) 00:13:40.370 fused_ordering(673) 00:13:40.370 fused_ordering(674) 00:13:40.370 fused_ordering(675) 00:13:40.370 fused_ordering(676) 00:13:40.370 fused_ordering(677) 00:13:40.370 fused_ordering(678) 00:13:40.370 fused_ordering(679) 00:13:40.370 fused_ordering(680) 00:13:40.370 fused_ordering(681) 00:13:40.370 fused_ordering(682) 00:13:40.370 fused_ordering(683) 00:13:40.370 fused_ordering(684) 00:13:40.370 fused_ordering(685) 00:13:40.370 fused_ordering(686) 00:13:40.370 fused_ordering(687) 00:13:40.370 fused_ordering(688) 00:13:40.370 fused_ordering(689) 00:13:40.370 fused_ordering(690) 00:13:40.370 fused_ordering(691) 00:13:40.370 fused_ordering(692) 00:13:40.370 fused_ordering(693) 00:13:40.370 fused_ordering(694) 00:13:40.370 fused_ordering(695) 00:13:40.370 fused_ordering(696) 00:13:40.370 fused_ordering(697) 00:13:40.370 fused_ordering(698) 00:13:40.370 fused_ordering(699) 00:13:40.370 fused_ordering(700) 00:13:40.370 fused_ordering(701) 00:13:40.370 fused_ordering(702) 00:13:40.370 fused_ordering(703) 00:13:40.370 fused_ordering(704) 00:13:40.370 fused_ordering(705) 00:13:40.370 fused_ordering(706) 00:13:40.370 fused_ordering(707) 00:13:40.370 fused_ordering(708) 00:13:40.370 fused_ordering(709) 00:13:40.370 fused_ordering(710) 00:13:40.370 fused_ordering(711) 00:13:40.370 fused_ordering(712) 00:13:40.370 fused_ordering(713) 00:13:40.370 fused_ordering(714) 00:13:40.370 fused_ordering(715) 00:13:40.370 fused_ordering(716) 00:13:40.370 fused_ordering(717) 00:13:40.370 fused_ordering(718) 
00:13:40.370 fused_ordering(719) 00:13:40.370 fused_ordering(720) 00:13:40.370 fused_ordering(721) 00:13:40.370 fused_ordering(722) 00:13:40.370 fused_ordering(723) 00:13:40.370 fused_ordering(724) 00:13:40.370 fused_ordering(725) 00:13:40.370 fused_ordering(726) 00:13:40.370 fused_ordering(727) 00:13:40.370 fused_ordering(728) 00:13:40.370 fused_ordering(729) 00:13:40.370 fused_ordering(730) 00:13:40.370 fused_ordering(731) 00:13:40.370 fused_ordering(732) 00:13:40.370 fused_ordering(733) 00:13:40.370 fused_ordering(734) 00:13:40.370 fused_ordering(735) 00:13:40.370 fused_ordering(736) 00:13:40.370 fused_ordering(737) 00:13:40.370 fused_ordering(738) 00:13:40.370 fused_ordering(739) 00:13:40.370 fused_ordering(740) 00:13:40.370 fused_ordering(741) 00:13:40.370 fused_ordering(742) 00:13:40.370 fused_ordering(743) 00:13:40.370 fused_ordering(744) 00:13:40.370 fused_ordering(745) 00:13:40.370 fused_ordering(746) 00:13:40.370 fused_ordering(747) 00:13:40.370 fused_ordering(748) 00:13:40.370 fused_ordering(749) 00:13:40.370 fused_ordering(750) 00:13:40.370 fused_ordering(751) 00:13:40.370 fused_ordering(752) 00:13:40.370 fused_ordering(753) 00:13:40.370 fused_ordering(754) 00:13:40.370 fused_ordering(755) 00:13:40.370 fused_ordering(756) 00:13:40.370 fused_ordering(757) 00:13:40.370 fused_ordering(758) 00:13:40.370 fused_ordering(759) 00:13:40.370 fused_ordering(760) 00:13:40.370 fused_ordering(761) 00:13:40.370 fused_ordering(762) 00:13:40.370 fused_ordering(763) 00:13:40.370 fused_ordering(764) 00:13:40.370 fused_ordering(765) 00:13:40.370 fused_ordering(766) 00:13:40.370 fused_ordering(767) 00:13:40.370 fused_ordering(768) 00:13:40.370 fused_ordering(769) 00:13:40.370 fused_ordering(770) 00:13:40.370 fused_ordering(771) 00:13:40.370 fused_ordering(772) 00:13:40.370 fused_ordering(773) 00:13:40.370 fused_ordering(774) 00:13:40.370 fused_ordering(775) 00:13:40.370 fused_ordering(776) 00:13:40.370 fused_ordering(777) 00:13:40.371 fused_ordering(778) 00:13:40.371 fused_ordering(779) 00:13:40.371 fused_ordering(780) 00:13:40.371 fused_ordering(781) 00:13:40.371 fused_ordering(782) 00:13:40.371 fused_ordering(783) 00:13:40.371 fused_ordering(784) 00:13:40.371 fused_ordering(785) 00:13:40.371 fused_ordering(786) 00:13:40.371 fused_ordering(787) 00:13:40.371 fused_ordering(788) 00:13:40.371 fused_ordering(789) 00:13:40.371 fused_ordering(790) 00:13:40.371 fused_ordering(791) 00:13:40.371 fused_ordering(792) 00:13:40.371 fused_ordering(793) 00:13:40.371 fused_ordering(794) 00:13:40.371 fused_ordering(795) 00:13:40.371 fused_ordering(796) 00:13:40.371 fused_ordering(797) 00:13:40.371 fused_ordering(798) 00:13:40.371 fused_ordering(799) 00:13:40.371 fused_ordering(800) 00:13:40.371 fused_ordering(801) 00:13:40.371 fused_ordering(802) 00:13:40.371 fused_ordering(803) 00:13:40.371 fused_ordering(804) 00:13:40.371 fused_ordering(805) 00:13:40.371 fused_ordering(806) 00:13:40.371 fused_ordering(807) 00:13:40.371 fused_ordering(808) 00:13:40.371 fused_ordering(809) 00:13:40.371 fused_ordering(810) 00:13:40.371 fused_ordering(811) 00:13:40.371 fused_ordering(812) 00:13:40.371 fused_ordering(813) 00:13:40.371 fused_ordering(814) 00:13:40.371 fused_ordering(815) 00:13:40.371 fused_ordering(816) 00:13:40.371 fused_ordering(817) 00:13:40.371 fused_ordering(818) 00:13:40.371 fused_ordering(819) 00:13:40.371 fused_ordering(820) 00:13:40.630 fused_ordering(821) 00:13:40.630 fused_ordering(822) 00:13:40.630 fused_ordering(823) 00:13:40.630 fused_ordering(824) 00:13:40.630 fused_ordering(825) 00:13:40.630 
fused_ordering(826) 00:13:40.630 fused_ordering(827) 00:13:40.630 fused_ordering(828) 00:13:40.630 fused_ordering(829) 00:13:40.630 fused_ordering(830) 00:13:40.630 fused_ordering(831) 00:13:40.630 fused_ordering(832) 00:13:40.630 fused_ordering(833) 00:13:40.630 fused_ordering(834) 00:13:40.630 fused_ordering(835) 00:13:40.630 fused_ordering(836) 00:13:40.630 fused_ordering(837) 00:13:40.630 fused_ordering(838) 00:13:40.630 fused_ordering(839) 00:13:40.630 fused_ordering(840) 00:13:40.630 fused_ordering(841) 00:13:40.630 fused_ordering(842) 00:13:40.630 fused_ordering(843) 00:13:40.630 fused_ordering(844) 00:13:40.630 fused_ordering(845) 00:13:40.630 fused_ordering(846) 00:13:40.630 fused_ordering(847) 00:13:40.630 fused_ordering(848) 00:13:40.630 fused_ordering(849) 00:13:40.630 fused_ordering(850) 00:13:40.630 fused_ordering(851) 00:13:40.630 fused_ordering(852) 00:13:40.630 fused_ordering(853) 00:13:40.630 fused_ordering(854) 00:13:40.630 fused_ordering(855) 00:13:40.630 fused_ordering(856) 00:13:40.630 fused_ordering(857) 00:13:40.630 fused_ordering(858) 00:13:40.630 fused_ordering(859) 00:13:40.630 fused_ordering(860) 00:13:40.630 fused_ordering(861) 00:13:40.630 fused_ordering(862) 00:13:40.630 fused_ordering(863) 00:13:40.630 fused_ordering(864) 00:13:40.630 fused_ordering(865) 00:13:40.630 fused_ordering(866) 00:13:40.630 fused_ordering(867) 00:13:40.630 fused_ordering(868) 00:13:40.630 fused_ordering(869) 00:13:40.630 fused_ordering(870) 00:13:40.630 fused_ordering(871) 00:13:40.630 fused_ordering(872) 00:13:40.630 fused_ordering(873) 00:13:40.630 fused_ordering(874) 00:13:40.630 fused_ordering(875) 00:13:40.630 fused_ordering(876) 00:13:40.630 fused_ordering(877) 00:13:40.630 fused_ordering(878) 00:13:40.630 fused_ordering(879) 00:13:40.630 fused_ordering(880) 00:13:40.630 fused_ordering(881) 00:13:40.630 fused_ordering(882) 00:13:40.630 fused_ordering(883) 00:13:40.630 fused_ordering(884) 00:13:40.630 fused_ordering(885) 00:13:40.630 fused_ordering(886) 00:13:40.630 fused_ordering(887) 00:13:40.630 fused_ordering(888) 00:13:40.630 fused_ordering(889) 00:13:40.630 fused_ordering(890) 00:13:40.630 fused_ordering(891) 00:13:40.630 fused_ordering(892) 00:13:40.630 fused_ordering(893) 00:13:40.630 fused_ordering(894) 00:13:40.630 fused_ordering(895) 00:13:40.630 fused_ordering(896) 00:13:40.630 fused_ordering(897) 00:13:40.630 fused_ordering(898) 00:13:40.630 fused_ordering(899) 00:13:40.630 fused_ordering(900) 00:13:40.630 fused_ordering(901) 00:13:40.630 fused_ordering(902) 00:13:40.630 fused_ordering(903) 00:13:40.630 fused_ordering(904) 00:13:40.630 fused_ordering(905) 00:13:40.630 fused_ordering(906) 00:13:40.630 fused_ordering(907) 00:13:40.630 fused_ordering(908) 00:13:40.630 fused_ordering(909) 00:13:40.630 fused_ordering(910) 00:13:40.630 fused_ordering(911) 00:13:40.630 fused_ordering(912) 00:13:40.630 fused_ordering(913) 00:13:40.630 fused_ordering(914) 00:13:40.630 fused_ordering(915) 00:13:40.630 fused_ordering(916) 00:13:40.630 fused_ordering(917) 00:13:40.630 fused_ordering(918) 00:13:40.630 fused_ordering(919) 00:13:40.630 fused_ordering(920) 00:13:40.630 fused_ordering(921) 00:13:40.630 fused_ordering(922) 00:13:40.630 fused_ordering(923) 00:13:40.630 fused_ordering(924) 00:13:40.630 fused_ordering(925) 00:13:40.630 fused_ordering(926) 00:13:40.630 fused_ordering(927) 00:13:40.630 fused_ordering(928) 00:13:40.630 fused_ordering(929) 00:13:40.630 fused_ordering(930) 00:13:40.630 fused_ordering(931) 00:13:40.630 fused_ordering(932) 00:13:40.630 fused_ordering(933) 
00:13:40.630 fused_ordering(934) 00:13:40.630 fused_ordering(935) 00:13:40.630 fused_ordering(936) 00:13:40.630 fused_ordering(937) 00:13:40.630 fused_ordering(938) 00:13:40.630 fused_ordering(939) 00:13:40.630 fused_ordering(940) 00:13:40.630 fused_ordering(941) 00:13:40.630 fused_ordering(942) 00:13:40.630 fused_ordering(943) 00:13:40.630 fused_ordering(944) 00:13:40.630 fused_ordering(945) 00:13:40.630 fused_ordering(946) 00:13:40.630 fused_ordering(947) 00:13:40.630 fused_ordering(948) 00:13:40.631 fused_ordering(949) 00:13:40.631 fused_ordering(950) 00:13:40.631 fused_ordering(951) 00:13:40.631 fused_ordering(952) 00:13:40.631 fused_ordering(953) 00:13:40.631 fused_ordering(954) 00:13:40.631 fused_ordering(955) 00:13:40.631 fused_ordering(956) 00:13:40.631 fused_ordering(957) 00:13:40.631 fused_ordering(958) 00:13:40.631 fused_ordering(959) 00:13:40.631 fused_ordering(960) 00:13:40.631 fused_ordering(961) 00:13:40.631 fused_ordering(962) 00:13:40.631 fused_ordering(963) 00:13:40.631 fused_ordering(964) 00:13:40.631 fused_ordering(965) 00:13:40.631 fused_ordering(966) 00:13:40.631 fused_ordering(967) 00:13:40.631 fused_ordering(968) 00:13:40.631 fused_ordering(969) 00:13:40.631 fused_ordering(970) 00:13:40.631 fused_ordering(971) 00:13:40.631 fused_ordering(972) 00:13:40.631 fused_ordering(973) 00:13:40.631 fused_ordering(974) 00:13:40.631 fused_ordering(975) 00:13:40.631 fused_ordering(976) 00:13:40.631 fused_ordering(977) 00:13:40.631 fused_ordering(978) 00:13:40.631 fused_ordering(979) 00:13:40.631 fused_ordering(980) 00:13:40.631 fused_ordering(981) 00:13:40.631 fused_ordering(982) 00:13:40.631 fused_ordering(983) 00:13:40.631 fused_ordering(984) 00:13:40.631 fused_ordering(985) 00:13:40.631 fused_ordering(986) 00:13:40.631 fused_ordering(987) 00:13:40.631 fused_ordering(988) 00:13:40.631 fused_ordering(989) 00:13:40.631 fused_ordering(990) 00:13:40.631 fused_ordering(991) 00:13:40.631 fused_ordering(992) 00:13:40.631 fused_ordering(993) 00:13:40.631 fused_ordering(994) 00:13:40.631 fused_ordering(995) 00:13:40.631 fused_ordering(996) 00:13:40.631 fused_ordering(997) 00:13:40.631 fused_ordering(998) 00:13:40.631 fused_ordering(999) 00:13:40.631 fused_ordering(1000) 00:13:40.631 fused_ordering(1001) 00:13:40.631 fused_ordering(1002) 00:13:40.631 fused_ordering(1003) 00:13:40.631 fused_ordering(1004) 00:13:40.631 fused_ordering(1005) 00:13:40.631 fused_ordering(1006) 00:13:40.631 fused_ordering(1007) 00:13:40.631 fused_ordering(1008) 00:13:40.631 fused_ordering(1009) 00:13:40.631 fused_ordering(1010) 00:13:40.631 fused_ordering(1011) 00:13:40.631 fused_ordering(1012) 00:13:40.631 fused_ordering(1013) 00:13:40.631 fused_ordering(1014) 00:13:40.631 fused_ordering(1015) 00:13:40.631 fused_ordering(1016) 00:13:40.631 fused_ordering(1017) 00:13:40.631 fused_ordering(1018) 00:13:40.631 fused_ordering(1019) 00:13:40.631 fused_ordering(1020) 00:13:40.631 fused_ordering(1021) 00:13:40.631 fused_ordering(1022) 00:13:40.631 fused_ordering(1023) 00:13:40.631 15:00:04 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:40.631 15:00:04 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:40.631 15:00:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:40.631 15:00:04 -- nvmf/common.sh@116 -- # sync 00:13:40.631 15:00:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:40.631 15:00:04 -- nvmf/common.sh@119 -- # set +e 00:13:40.631 15:00:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:40.631 15:00:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:40.631 rmmod 
nvme_tcp 00:13:40.631 rmmod nvme_fabrics 00:13:40.631 rmmod nvme_keyring 00:13:40.890 15:00:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:40.890 15:00:04 -- nvmf/common.sh@123 -- # set -e 00:13:40.890 15:00:04 -- nvmf/common.sh@124 -- # return 0 00:13:40.890 15:00:04 -- nvmf/common.sh@477 -- # '[' -n 82318 ']' 00:13:40.890 15:00:04 -- nvmf/common.sh@478 -- # killprocess 82318 00:13:40.890 15:00:04 -- common/autotest_common.sh@936 -- # '[' -z 82318 ']' 00:13:40.890 15:00:04 -- common/autotest_common.sh@940 -- # kill -0 82318 00:13:40.890 15:00:04 -- common/autotest_common.sh@941 -- # uname 00:13:40.890 15:00:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:40.890 15:00:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82318 00:13:40.890 15:00:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:40.890 15:00:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:40.890 killing process with pid 82318 00:13:40.890 15:00:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82318' 00:13:40.890 15:00:04 -- common/autotest_common.sh@955 -- # kill 82318 00:13:40.890 15:00:04 -- common/autotest_common.sh@960 -- # wait 82318 00:13:40.890 15:00:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:40.890 15:00:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:40.890 15:00:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:40.890 15:00:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:40.890 15:00:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:40.890 15:00:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.890 15:00:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:40.890 15:00:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.149 15:00:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:41.149 00:13:41.149 real 0m3.932s 00:13:41.149 user 0m4.544s 00:13:41.149 sys 0m1.325s 00:13:41.149 15:00:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:41.149 15:00:04 -- common/autotest_common.sh@10 -- # set +x 00:13:41.149 ************************************ 00:13:41.149 END TEST nvmf_fused_ordering 00:13:41.149 ************************************ 00:13:41.149 15:00:04 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:41.149 15:00:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:41.149 15:00:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:41.149 15:00:04 -- common/autotest_common.sh@10 -- # set +x 00:13:41.149 ************************************ 00:13:41.149 START TEST nvmf_delete_subsystem 00:13:41.149 ************************************ 00:13:41.149 15:00:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:41.149 * Looking for test storage... 
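Before the fused_ordering listing above, the test stood the target up over JSON-RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, and a null bdev attached as namespace 1 (reported as 1GB); the initiator-side fused_ordering app then connected and printed the numbered entries. A sketch of that sequence written as direct scripts/rpc.py calls, with arguments copied from the trace (the test itself goes through the rpc_cmd wrapper; the SPDK variable here is just shorthand for the repo path in the log):

```bash
#!/usr/bin/env bash
# Sketch of the RPC sequence behind the fused_ordering run above. The target
# itself was started earlier inside the namespace as:
#   ip netns exec nvmf_tgt_ns_spdk \
#     /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
SPDK=/home/vagrant/spdk_repo/spdk

# Transport options copied verbatim from the trace (-t tcp -o -u 8192).
"$SPDK"/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

"$SPDK"/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10
"$SPDK"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
"$SPDK"/scripts/rpc.py bdev_null_create NULL1 1000 512
"$SPDK"/scripts/rpc.py bdev_wait_for_examine
"$SPDK"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Initiator side: connect over TCP and issue the numbered operations printed
# as fused_ordering(0) .. fused_ordering(1023) above.
"$SPDK"/test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

# Teardown, mirroring the tail of the run above:
#   kill the nvmf_tgt pid (82318 in this log), then
#   modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics
#   ip -4 addr flush nvmf_init_if
```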
00:13:41.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:41.149 15:00:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:41.149 15:00:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:41.149 15:00:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:41.149 15:00:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:41.149 15:00:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:41.149 15:00:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:41.149 15:00:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:41.149 15:00:04 -- scripts/common.sh@335 -- # IFS=.-: 00:13:41.149 15:00:04 -- scripts/common.sh@335 -- # read -ra ver1 00:13:41.149 15:00:04 -- scripts/common.sh@336 -- # IFS=.-: 00:13:41.149 15:00:04 -- scripts/common.sh@336 -- # read -ra ver2 00:13:41.149 15:00:04 -- scripts/common.sh@337 -- # local 'op=<' 00:13:41.149 15:00:04 -- scripts/common.sh@339 -- # ver1_l=2 00:13:41.149 15:00:04 -- scripts/common.sh@340 -- # ver2_l=1 00:13:41.149 15:00:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:41.149 15:00:04 -- scripts/common.sh@343 -- # case "$op" in 00:13:41.149 15:00:04 -- scripts/common.sh@344 -- # : 1 00:13:41.149 15:00:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:41.149 15:00:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:41.149 15:00:04 -- scripts/common.sh@364 -- # decimal 1 00:13:41.149 15:00:04 -- scripts/common.sh@352 -- # local d=1 00:13:41.149 15:00:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:41.149 15:00:04 -- scripts/common.sh@354 -- # echo 1 00:13:41.149 15:00:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:41.149 15:00:04 -- scripts/common.sh@365 -- # decimal 2 00:13:41.149 15:00:04 -- scripts/common.sh@352 -- # local d=2 00:13:41.149 15:00:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:41.149 15:00:04 -- scripts/common.sh@354 -- # echo 2 00:13:41.149 15:00:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:41.149 15:00:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:41.149 15:00:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:41.149 15:00:04 -- scripts/common.sh@367 -- # return 0 00:13:41.149 15:00:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:41.149 15:00:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:41.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.149 --rc genhtml_branch_coverage=1 00:13:41.149 --rc genhtml_function_coverage=1 00:13:41.149 --rc genhtml_legend=1 00:13:41.149 --rc geninfo_all_blocks=1 00:13:41.149 --rc geninfo_unexecuted_blocks=1 00:13:41.149 00:13:41.149 ' 00:13:41.149 15:00:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:41.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.149 --rc genhtml_branch_coverage=1 00:13:41.149 --rc genhtml_function_coverage=1 00:13:41.149 --rc genhtml_legend=1 00:13:41.149 --rc geninfo_all_blocks=1 00:13:41.149 --rc geninfo_unexecuted_blocks=1 00:13:41.149 00:13:41.149 ' 00:13:41.149 15:00:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:41.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.149 --rc genhtml_branch_coverage=1 00:13:41.149 --rc genhtml_function_coverage=1 00:13:41.149 --rc genhtml_legend=1 00:13:41.149 --rc geninfo_all_blocks=1 00:13:41.149 --rc geninfo_unexecuted_blocks=1 00:13:41.149 00:13:41.149 ' 00:13:41.149 
15:00:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:41.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.149 --rc genhtml_branch_coverage=1 00:13:41.149 --rc genhtml_function_coverage=1 00:13:41.149 --rc genhtml_legend=1 00:13:41.149 --rc geninfo_all_blocks=1 00:13:41.149 --rc geninfo_unexecuted_blocks=1 00:13:41.149 00:13:41.149 ' 00:13:41.149 15:00:04 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:41.149 15:00:04 -- nvmf/common.sh@7 -- # uname -s 00:13:41.149 15:00:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.149 15:00:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.149 15:00:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.149 15:00:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.149 15:00:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.149 15:00:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.149 15:00:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.149 15:00:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.149 15:00:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.149 15:00:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.408 15:00:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:13:41.408 15:00:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:13:41.408 15:00:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.408 15:00:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.409 15:00:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:41.409 15:00:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:41.409 15:00:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.409 15:00:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.409 15:00:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.409 15:00:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.409 15:00:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.409 15:00:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.409 15:00:04 -- paths/export.sh@5 -- # export PATH 00:13:41.409 15:00:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.409 15:00:04 -- nvmf/common.sh@46 -- # : 0 00:13:41.409 15:00:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:41.409 15:00:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:41.409 15:00:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:41.409 15:00:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.409 15:00:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.409 15:00:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:41.409 15:00:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:41.409 15:00:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:41.409 15:00:04 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:41.409 15:00:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:41.409 15:00:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.409 15:00:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:41.409 15:00:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:41.409 15:00:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:41.409 15:00:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.409 15:00:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.409 15:00:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.409 15:00:04 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:41.409 15:00:04 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:41.409 15:00:04 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:41.409 15:00:04 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:41.409 15:00:04 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:41.409 15:00:04 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:41.409 15:00:04 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:41.409 15:00:04 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:41.409 15:00:04 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:41.409 15:00:04 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:41.409 15:00:04 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:41.409 15:00:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:41.409 15:00:04 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:41.409 15:00:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:41.409 15:00:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:41.409 15:00:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:41.409 15:00:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:41.409 15:00:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:41.409 15:00:04 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:41.409 15:00:04 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:41.409 Cannot find device "nvmf_tgt_br" 00:13:41.409 15:00:04 -- nvmf/common.sh@154 -- # true 00:13:41.409 15:00:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:41.409 Cannot find device "nvmf_tgt_br2" 00:13:41.409 15:00:04 -- nvmf/common.sh@155 -- # true 00:13:41.409 15:00:04 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:41.409 15:00:04 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:41.409 Cannot find device "nvmf_tgt_br" 00:13:41.409 15:00:04 -- nvmf/common.sh@157 -- # true 00:13:41.409 15:00:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:41.409 Cannot find device "nvmf_tgt_br2" 00:13:41.409 15:00:04 -- nvmf/common.sh@158 -- # true 00:13:41.409 15:00:04 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:41.409 15:00:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:41.409 15:00:04 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:41.409 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:41.409 15:00:04 -- nvmf/common.sh@161 -- # true 00:13:41.409 15:00:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:41.409 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:41.409 15:00:04 -- nvmf/common.sh@162 -- # true 00:13:41.409 15:00:04 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:41.409 15:00:04 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:41.409 15:00:04 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:41.409 15:00:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:41.409 15:00:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:41.409 15:00:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:41.409 15:00:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:41.409 15:00:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:41.409 15:00:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:41.409 15:00:04 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:41.409 15:00:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:41.409 15:00:04 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:41.409 15:00:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:41.409 15:00:04 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:41.409 15:00:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:41.409 15:00:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:41.668 15:00:04 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:41.668 15:00:05 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:41.668 15:00:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:41.668 15:00:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:41.668 15:00:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:41.668 15:00:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:41.668 15:00:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:41.668 15:00:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:41.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:41.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:13:41.668 00:13:41.668 --- 10.0.0.2 ping statistics --- 00:13:41.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.668 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:13:41.668 15:00:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:41.668 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:41.668 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:13:41.668 00:13:41.668 --- 10.0.0.3 ping statistics --- 00:13:41.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.668 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:13:41.668 15:00:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:41.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:41.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:13:41.668 00:13:41.668 --- 10.0.0.1 ping statistics --- 00:13:41.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.668 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:13:41.668 15:00:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:41.668 15:00:05 -- nvmf/common.sh@421 -- # return 0 00:13:41.668 15:00:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:41.668 15:00:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:41.668 15:00:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:41.668 15:00:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:41.668 15:00:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:41.668 15:00:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:41.668 15:00:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:41.668 15:00:05 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:41.668 15:00:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:41.668 15:00:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:41.668 15:00:05 -- common/autotest_common.sh@10 -- # set +x 00:13:41.668 15:00:05 -- nvmf/common.sh@469 -- # nvmfpid=82584 00:13:41.668 15:00:05 -- nvmf/common.sh@470 -- # waitforlisten 82584 00:13:41.668 15:00:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:41.668 15:00:05 -- common/autotest_common.sh@829 -- # '[' -z 82584 ']' 00:13:41.668 15:00:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.668 15:00:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:41.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.668 15:00:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
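Everything from 'ip netns add nvmf_tgt_ns_spdk' through the three pings above is nvmf_veth_init building the virtual topology: one initiator-side veth pair left in the default namespace, two target-side pairs moved into nvmf_tgt_ns_spdk, the bridge-facing peers enslaved to nvmf_br, an iptables accept rule for the NVMe/TCP port, and a reachability check. Condensed into a standalone sketch (interface names and addresses copied from the trace; needs root, and unlike common.sh it does no cleanup of a previous run):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side, first listener
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target side, second listener
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up   # ties the three pairs together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                    # initiator -> both target addresses

With that topology in place the target is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -m 0x3, as the trace shows) so it listens on 10.0.0.2 while the initiator tools stay in the default namespace.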
00:13:41.668 15:00:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:41.668 15:00:05 -- common/autotest_common.sh@10 -- # set +x 00:13:41.668 [2024-11-18 15:00:05.152486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:41.668 [2024-11-18 15:00:05.152560] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.927 [2024-11-18 15:00:05.291815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:41.927 [2024-11-18 15:00:05.363095] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:41.927 [2024-11-18 15:00:05.363282] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.927 [2024-11-18 15:00:05.363299] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.928 [2024-11-18 15:00:05.363310] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:41.928 [2024-11-18 15:00:05.363483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.928 [2024-11-18 15:00:05.363492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.864 15:00:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:42.864 15:00:06 -- common/autotest_common.sh@862 -- # return 0 00:13:42.864 15:00:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:42.864 15:00:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:42.864 15:00:06 -- common/autotest_common.sh@10 -- # set +x 00:13:42.864 15:00:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.864 15:00:06 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:42.864 15:00:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.864 15:00:06 -- common/autotest_common.sh@10 -- # set +x 00:13:42.864 [2024-11-18 15:00:06.245182] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.864 15:00:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.864 15:00:06 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:42.864 15:00:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.864 15:00:06 -- common/autotest_common.sh@10 -- # set +x 00:13:42.864 15:00:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.864 15:00:06 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.864 15:00:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.864 15:00:06 -- common/autotest_common.sh@10 -- # set +x 00:13:42.865 [2024-11-18 15:00:06.261369] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.865 15:00:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.865 15:00:06 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:42.865 15:00:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.865 15:00:06 -- common/autotest_common.sh@10 -- # set +x 00:13:42.865 NULL1 00:13:42.865 15:00:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.865 15:00:06 -- 
target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:42.865 15:00:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.865 15:00:06 -- common/autotest_common.sh@10 -- # set +x 00:13:42.865 Delay0 00:13:42.865 15:00:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.865 15:00:06 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.865 15:00:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.865 15:00:06 -- common/autotest_common.sh@10 -- # set +x 00:13:42.865 15:00:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.865 15:00:06 -- target/delete_subsystem.sh@28 -- # perf_pid=82636 00:13:42.865 15:00:06 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:42.865 15:00:06 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:43.123 [2024-11-18 15:00:06.466050] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:45.028 15:00:08 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.028 15:00:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.028 15:00:08 -- common/autotest_common.sh@10 -- # set +x 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read 
completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 [2024-11-18 15:00:08.505718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4bce70 is same with the state(5) to be set 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error 
(sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read 
completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Write completed with error (sct=0, sc=8) 00:13:45.028 starting I/O failed: -6 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.028 Read completed with error (sct=0, sc=8) 00:13:45.029 starting I/O failed: -6 00:13:45.029 Read completed with error (sct=0, sc=8) 00:13:45.029 Read completed with error (sct=0, sc=8) 00:13:45.029 starting I/O failed: -6 00:13:45.029 Write completed with error (sct=0, sc=8) 00:13:45.029 Read completed with error (sct=0, sc=8) 00:13:45.029 starting I/O failed: -6 00:13:45.029 Write completed with error (sct=0, sc=8) 00:13:45.029 Read completed with error (sct=0, sc=8) 00:13:45.029 starting I/O failed: -6 00:13:45.029 Read completed with error (sct=0, sc=8) 00:13:45.029 Write completed with error (sct=0, sc=8) 00:13:45.029 starting I/O failed: -6 00:13:45.029 Write completed with error (sct=0, sc=8) 00:13:45.029 Read completed with error (sct=0, sc=8) 00:13:45.029 starting I/O failed: -6 00:13:45.029 Read completed with error (sct=0, sc=8) 00:13:45.029 Read completed with error (sct=0, sc=8) 00:13:45.029 starting I/O failed: -6 00:13:45.029 Write completed with error (sct=0, sc=8) 00:13:45.029 Read completed with error (sct=0, sc=8) 00:13:45.029 starting I/O failed: -6 00:13:45.029 Read completed with error (sct=0, sc=8) 00:13:45.029 Read completed with error (sct=0, sc=8) 00:13:45.029 starting I/O failed: -6 00:13:45.029 Write completed with error (sct=0, sc=8) 00:13:45.029 Read completed with error (sct=0, sc=8) 00:13:45.029 starting I/O failed: -6 00:13:45.029 Read completed with error (sct=0, sc=8) 00:13:45.029 Write completed with error (sct=0, sc=8) 00:13:45.029 starting I/O failed: -6 00:13:45.029 Read completed with error (sct=0, sc=8) 00:13:45.029 Write completed with error (sct=0, sc=8) 00:13:45.029 starting I/O failed: -6 00:13:45.029 Read completed with error (sct=0, sc=8) 00:13:45.029 Read completed with error (sct=0, sc=8) 00:13:45.029 starting I/O failed: -6 00:13:45.029 Read completed with error (sct=0, sc=8) 00:13:45.029 Read completed with error (sct=0, sc=8) 00:13:45.029 starting I/O failed: -6 00:13:45.029 Read completed with error (sct=0, sc=8) 00:13:45.029 starting I/O failed: -6 00:13:45.029 starting I/O failed: -6 00:13:45.029 starting I/O failed: -6 00:13:45.029 starting I/O failed: -6 00:13:45.029 starting I/O failed: -6 00:13:45.029 starting I/O failed: -6 00:13:45.029 starting I/O failed: -6 00:13:45.029 starting I/O failed: -6 00:13:45.029 starting I/O failed: -6 00:13:45.029 starting I/O failed: -6 00:13:45.029 starting I/O failed: -6 00:13:45.029 starting I/O failed: -6 00:13:45.029 starting I/O failed: -6 00:13:45.029 starting I/O failed: -6 00:13:45.965 [2024-11-18 15:00:09.479908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4bb070 is same with the state(5) to be set 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Write completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Write completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Write completed with error (sct=0, sc=8) 
00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Write completed with error (sct=0, sc=8) 00:13:45.965 Write completed with error (sct=0, sc=8) 00:13:45.965 Write completed with error (sct=0, sc=8) 00:13:45.965 Write completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Write completed with error (sct=0, sc=8) 00:13:45.965 Write completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Write completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Write completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Write completed with error (sct=0, sc=8) 00:13:45.965 Write completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Write completed with error (sct=0, sc=8) 00:13:45.965 [2024-11-18 15:00:09.508056] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5de000bf20 is same with the state(5) to be set 00:13:45.965 Write completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Write completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.965 Write completed with error (sct=0, sc=8) 00:13:45.965 Write completed with error (sct=0, sc=8) 00:13:45.965 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Write completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Write completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Write completed with error (sct=0, sc=8) 00:13:45.966 Write completed with error (sct=0, sc=8) 00:13:45.966 [2024-11-18 15:00:09.508586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4bcbc0 is same with the state(5) to be set 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Write completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 
Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Write completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Write completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Write completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Write completed with error (sct=0, sc=8) 00:13:45.966 [2024-11-18 15:00:09.508851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4bd120 is same with the state(5) to be set 00:13:45.966 Write completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Write completed with error (sct=0, sc=8) 00:13:45.966 Write completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Write completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Write completed with error (sct=0, sc=8) 00:13:45.966 Write completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Write completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Write completed with error (sct=0, sc=8) 00:13:45.966 Write completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Write completed with error (sct=0, sc=8) 00:13:45.966 Write completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 Read completed with error (sct=0, sc=8) 00:13:45.966 [2024-11-18 15:00:09.509766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5de000c600 is same with the state(5) to be set 00:13:45.966 [2024-11-18 15:00:09.510649] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4bb070 (9): Bad file descriptor 00:13:45.966 15:00:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.966 
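The wall of 'completed with error (sct=0, sc=8)' entries above is the expected outcome of this test: delete_subsystem.sh removes cnode1 while spdk_nvme_perf still has a queue depth of 128 outstanding against the Delay0 namespace, so every in-flight command is completed as aborted rather than successful. The configuration the script drove to reach this point, written out as it could be issued by hand with scripts/rpc.py (rpc_cmd in the trace is effectively a wrapper around that script; all values below are copied from the trace):

    rpc=scripts/rpc.py                    # run from the SPDK repo root, target already started
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512  # 1000 MB null bdev, 512-byte blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # start I/O against the namespace, then delete the subsystem underneath it
    build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # outstanding I/O comes back aborted
    wait

The large delay values on the bdev_delay vbdev keep commands in flight long enough for the deletion to race with them, which is why the perf summary that follows reports errors and ~1 s average latencies instead of clean completions.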
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:45.966 15:00:09 -- target/delete_subsystem.sh@34 -- # delay=0 00:13:45.966 15:00:09 -- target/delete_subsystem.sh@35 -- # kill -0 82636 00:13:45.966 15:00:09 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:45.966 Initializing NVMe Controllers 00:13:45.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:45.966 Controller IO queue size 128, less than required. 00:13:45.966 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:45.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:45.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:45.966 Initialization complete. Launching workers. 00:13:45.966 ======================================================== 00:13:45.966 Latency(us) 00:13:45.966 Device Information : IOPS MiB/s Average min max 00:13:45.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.54 0.08 910908.07 553.80 2004306.59 00:13:45.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 168.51 0.08 1001599.92 1330.96 2004140.45 00:13:45.966 ======================================================== 00:13:45.966 Total : 333.05 0.16 956793.83 553.80 2004306.59 00:13:45.966 00:13:46.534 15:00:10 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:46.534 15:00:10 -- target/delete_subsystem.sh@35 -- # kill -0 82636 00:13:46.534 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (82636) - No such process 00:13:46.534 15:00:10 -- target/delete_subsystem.sh@45 -- # NOT wait 82636 00:13:46.534 15:00:10 -- common/autotest_common.sh@650 -- # local es=0 00:13:46.534 15:00:10 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 82636 00:13:46.534 15:00:10 -- common/autotest_common.sh@638 -- # local arg=wait 00:13:46.534 15:00:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:46.534 15:00:10 -- common/autotest_common.sh@642 -- # type -t wait 00:13:46.534 15:00:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:46.534 15:00:10 -- common/autotest_common.sh@653 -- # wait 82636 00:13:46.534 15:00:10 -- common/autotest_common.sh@653 -- # es=1 00:13:46.534 15:00:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:46.534 15:00:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:46.534 15:00:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:46.534 15:00:10 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:46.534 15:00:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.534 15:00:10 -- common/autotest_common.sh@10 -- # set +x 00:13:46.534 15:00:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.534 15:00:10 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:46.534 15:00:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.534 15:00:10 -- common/autotest_common.sh@10 -- # set +x 00:13:46.534 [2024-11-18 15:00:10.034601] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.534 15:00:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.534 15:00:10 -- target/delete_subsystem.sh@50 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.534 15:00:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.534 15:00:10 -- common/autotest_common.sh@10 -- # set +x 00:13:46.534 15:00:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.534 15:00:10 -- target/delete_subsystem.sh@54 -- # perf_pid=82683 00:13:46.534 15:00:10 -- target/delete_subsystem.sh@56 -- # delay=0 00:13:46.534 15:00:10 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:46.534 15:00:10 -- target/delete_subsystem.sh@57 -- # kill -0 82683 00:13:46.534 15:00:10 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:46.793 [2024-11-18 15:00:10.217280] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:47.053 15:00:10 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:47.053 15:00:10 -- target/delete_subsystem.sh@57 -- # kill -0 82683 00:13:47.053 15:00:10 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:47.620 15:00:11 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:47.620 15:00:11 -- target/delete_subsystem.sh@57 -- # kill -0 82683 00:13:47.620 15:00:11 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:48.188 15:00:11 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:48.188 15:00:11 -- target/delete_subsystem.sh@57 -- # kill -0 82683 00:13:48.188 15:00:11 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:48.756 15:00:12 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:48.756 15:00:12 -- target/delete_subsystem.sh@57 -- # kill -0 82683 00:13:48.756 15:00:12 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:49.014 15:00:12 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:49.014 15:00:12 -- target/delete_subsystem.sh@57 -- # kill -0 82683 00:13:49.014 15:00:12 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:49.581 15:00:13 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:49.581 15:00:13 -- target/delete_subsystem.sh@57 -- # kill -0 82683 00:13:49.581 15:00:13 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:49.839 Initializing NVMe Controllers 00:13:49.839 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:49.839 Controller IO queue size 128, less than required. 00:13:49.839 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:49.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:49.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:49.839 Initialization complete. Launching workers. 
00:13:49.839 ======================================================== 00:13:49.839 Latency(us) 00:13:49.839 Device Information : IOPS MiB/s Average min max 00:13:49.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004509.80 1000165.99 1040891.39 00:13:49.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006700.69 1000238.46 1016946.15 00:13:49.839 ======================================================== 00:13:49.839 Total : 256.00 0.12 1005605.24 1000165.99 1040891.39 00:13:49.839 00:13:50.098 15:00:13 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:50.098 15:00:13 -- target/delete_subsystem.sh@57 -- # kill -0 82683 00:13:50.098 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (82683) - No such process 00:13:50.098 15:00:13 -- target/delete_subsystem.sh@67 -- # wait 82683 00:13:50.098 15:00:13 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:50.098 15:00:13 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:50.098 15:00:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:50.098 15:00:13 -- nvmf/common.sh@116 -- # sync 00:13:50.098 15:00:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:50.098 15:00:13 -- nvmf/common.sh@119 -- # set +e 00:13:50.098 15:00:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:50.098 15:00:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:50.098 rmmod nvme_tcp 00:13:50.098 rmmod nvme_fabrics 00:13:50.098 rmmod nvme_keyring 00:13:50.356 15:00:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:50.356 15:00:13 -- nvmf/common.sh@123 -- # set -e 00:13:50.356 15:00:13 -- nvmf/common.sh@124 -- # return 0 00:13:50.356 15:00:13 -- nvmf/common.sh@477 -- # '[' -n 82584 ']' 00:13:50.356 15:00:13 -- nvmf/common.sh@478 -- # killprocess 82584 00:13:50.356 15:00:13 -- common/autotest_common.sh@936 -- # '[' -z 82584 ']' 00:13:50.356 15:00:13 -- common/autotest_common.sh@940 -- # kill -0 82584 00:13:50.356 15:00:13 -- common/autotest_common.sh@941 -- # uname 00:13:50.356 15:00:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:50.356 15:00:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82584 00:13:50.356 15:00:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:50.356 15:00:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:50.356 killing process with pid 82584 00:13:50.356 15:00:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82584' 00:13:50.356 15:00:13 -- common/autotest_common.sh@955 -- # kill 82584 00:13:50.356 15:00:13 -- common/autotest_common.sh@960 -- # wait 82584 00:13:50.356 15:00:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:50.356 15:00:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:50.356 15:00:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:50.356 15:00:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:50.356 15:00:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:50.356 15:00:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.356 15:00:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.356 15:00:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.614 15:00:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:50.614 00:13:50.614 real 0m9.420s 00:13:50.614 user 0m29.377s 00:13:50.614 sys 0m1.188s 00:13:50.614 15:00:13 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:13:50.614 ************************************ 00:13:50.614 END TEST nvmf_delete_subsystem 00:13:50.614 ************************************ 00:13:50.614 15:00:13 -- common/autotest_common.sh@10 -- # set +x 00:13:50.614 15:00:13 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:13:50.614 15:00:13 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:13:50.614 15:00:13 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:50.614 15:00:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:50.614 15:00:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:50.614 15:00:13 -- common/autotest_common.sh@10 -- # set +x 00:13:50.614 ************************************ 00:13:50.614 START TEST nvmf_host_management 00:13:50.614 ************************************ 00:13:50.614 15:00:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:50.614 * Looking for test storage... 00:13:50.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:50.614 15:00:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:50.614 15:00:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:50.614 15:00:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:50.614 15:00:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:50.614 15:00:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:50.614 15:00:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:50.614 15:00:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:50.614 15:00:14 -- scripts/common.sh@335 -- # IFS=.-: 00:13:50.614 15:00:14 -- scripts/common.sh@335 -- # read -ra ver1 00:13:50.614 15:00:14 -- scripts/common.sh@336 -- # IFS=.-: 00:13:50.614 15:00:14 -- scripts/common.sh@336 -- # read -ra ver2 00:13:50.614 15:00:14 -- scripts/common.sh@337 -- # local 'op=<' 00:13:50.614 15:00:14 -- scripts/common.sh@339 -- # ver1_l=2 00:13:50.614 15:00:14 -- scripts/common.sh@340 -- # ver2_l=1 00:13:50.614 15:00:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:50.614 15:00:14 -- scripts/common.sh@343 -- # case "$op" in 00:13:50.614 15:00:14 -- scripts/common.sh@344 -- # : 1 00:13:50.614 15:00:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:50.614 15:00:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:50.614 15:00:14 -- scripts/common.sh@364 -- # decimal 1 00:13:50.614 15:00:14 -- scripts/common.sh@352 -- # local d=1 00:13:50.614 15:00:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:50.614 15:00:14 -- scripts/common.sh@354 -- # echo 1 00:13:50.614 15:00:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:50.614 15:00:14 -- scripts/common.sh@365 -- # decimal 2 00:13:50.614 15:00:14 -- scripts/common.sh@352 -- # local d=2 00:13:50.614 15:00:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:50.614 15:00:14 -- scripts/common.sh@354 -- # echo 2 00:13:50.614 15:00:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:50.614 15:00:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:50.614 15:00:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:50.614 15:00:14 -- scripts/common.sh@367 -- # return 0 00:13:50.614 15:00:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:50.614 15:00:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:50.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.614 --rc genhtml_branch_coverage=1 00:13:50.614 --rc genhtml_function_coverage=1 00:13:50.614 --rc genhtml_legend=1 00:13:50.614 --rc geninfo_all_blocks=1 00:13:50.614 --rc geninfo_unexecuted_blocks=1 00:13:50.614 00:13:50.614 ' 00:13:50.614 15:00:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:50.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.614 --rc genhtml_branch_coverage=1 00:13:50.614 --rc genhtml_function_coverage=1 00:13:50.614 --rc genhtml_legend=1 00:13:50.614 --rc geninfo_all_blocks=1 00:13:50.614 --rc geninfo_unexecuted_blocks=1 00:13:50.614 00:13:50.614 ' 00:13:50.614 15:00:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:50.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.614 --rc genhtml_branch_coverage=1 00:13:50.614 --rc genhtml_function_coverage=1 00:13:50.614 --rc genhtml_legend=1 00:13:50.614 --rc geninfo_all_blocks=1 00:13:50.614 --rc geninfo_unexecuted_blocks=1 00:13:50.614 00:13:50.614 ' 00:13:50.614 15:00:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:50.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.614 --rc genhtml_branch_coverage=1 00:13:50.614 --rc genhtml_function_coverage=1 00:13:50.614 --rc genhtml_legend=1 00:13:50.614 --rc geninfo_all_blocks=1 00:13:50.614 --rc geninfo_unexecuted_blocks=1 00:13:50.614 00:13:50.614 ' 00:13:50.614 15:00:14 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:50.614 15:00:14 -- nvmf/common.sh@7 -- # uname -s 00:13:50.614 15:00:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.614 15:00:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.614 15:00:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.614 15:00:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.614 15:00:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.614 15:00:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.614 15:00:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.614 15:00:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.614 15:00:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.614 15:00:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.614 15:00:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 
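The host identifiers set here come from nvme gen-hostnqn: an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, with the same UUID reused as the host ID, and common.sh stores them in the NVME_HOST array for later host-side commands. This excerpt does not reach an actual connect, but for reference the pieces defined above would typically combine into an nvme-cli invocation like the following (illustrative only; subsystem NQN and address reuse values seen elsewhere in this log):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd \
        --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd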
00:13:50.614 15:00:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:13:50.614 15:00:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.614 15:00:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.614 15:00:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:50.614 15:00:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:50.873 15:00:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.873 15:00:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.873 15:00:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.873 15:00:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.873 15:00:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.873 15:00:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.873 15:00:14 -- paths/export.sh@5 -- # export PATH 00:13:50.873 15:00:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.873 15:00:14 -- nvmf/common.sh@46 -- # : 0 00:13:50.873 15:00:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:50.873 15:00:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:50.873 15:00:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:50.873 15:00:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.873 15:00:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.873 15:00:14 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:50.873 15:00:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:50.873 15:00:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:50.873 15:00:14 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:50.873 15:00:14 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:50.873 15:00:14 -- target/host_management.sh@104 -- # nvmftestinit 00:13:50.873 15:00:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:50.873 15:00:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.873 15:00:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:50.873 15:00:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:50.873 15:00:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:50.873 15:00:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.873 15:00:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.873 15:00:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.873 15:00:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:50.873 15:00:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:50.873 15:00:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:50.873 15:00:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:50.873 15:00:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:50.873 15:00:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:50.873 15:00:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.873 15:00:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.873 15:00:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:50.873 15:00:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:50.873 15:00:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:50.873 15:00:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:50.873 15:00:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:50.873 15:00:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.873 15:00:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:50.873 15:00:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:50.873 15:00:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:50.873 15:00:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:50.873 15:00:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:50.873 15:00:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:50.873 Cannot find device "nvmf_tgt_br" 00:13:50.873 15:00:14 -- nvmf/common.sh@154 -- # true 00:13:50.873 15:00:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:50.873 Cannot find device "nvmf_tgt_br2" 00:13:50.873 15:00:14 -- nvmf/common.sh@155 -- # true 00:13:50.873 15:00:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:50.873 15:00:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:50.873 Cannot find device "nvmf_tgt_br" 00:13:50.873 15:00:14 -- nvmf/common.sh@157 -- # true 00:13:50.873 15:00:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:50.873 Cannot find device "nvmf_tgt_br2" 00:13:50.873 15:00:14 -- nvmf/common.sh@158 -- # true 00:13:50.873 15:00:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:50.873 15:00:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:50.873 15:00:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:13:50.873 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:50.873 15:00:14 -- nvmf/common.sh@161 -- # true 00:13:50.873 15:00:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:50.873 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:50.873 15:00:14 -- nvmf/common.sh@162 -- # true 00:13:50.873 15:00:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:50.873 15:00:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:50.873 15:00:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:50.873 15:00:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:50.873 15:00:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:50.873 15:00:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:50.873 15:00:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:50.873 15:00:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:50.873 15:00:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:50.873 15:00:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:50.873 15:00:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:50.873 15:00:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:50.873 15:00:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:50.873 15:00:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:50.873 15:00:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:50.873 15:00:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:51.131 15:00:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:51.131 15:00:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:51.131 15:00:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:51.131 15:00:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:51.131 15:00:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:51.131 15:00:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:51.131 15:00:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:51.131 15:00:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:51.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:13:51.131 00:13:51.131 --- 10.0.0.2 ping statistics --- 00:13:51.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.131 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:13:51.131 15:00:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:51.131 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:51.131 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:13:51.131 00:13:51.131 --- 10.0.0.3 ping statistics --- 00:13:51.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.131 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:13:51.131 15:00:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:51.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:51.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:13:51.131 00:13:51.131 --- 10.0.0.1 ping statistics --- 00:13:51.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.131 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:13:51.131 15:00:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.131 15:00:14 -- nvmf/common.sh@421 -- # return 0 00:13:51.131 15:00:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:51.131 15:00:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.131 15:00:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:51.131 15:00:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:51.131 15:00:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.131 15:00:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:51.131 15:00:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:51.131 15:00:14 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:13:51.131 15:00:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:51.131 15:00:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:51.131 15:00:14 -- common/autotest_common.sh@10 -- # set +x 00:13:51.131 ************************************ 00:13:51.131 START TEST nvmf_host_management 00:13:51.131 ************************************ 00:13:51.131 15:00:14 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:13:51.131 15:00:14 -- target/host_management.sh@69 -- # starttarget 00:13:51.131 15:00:14 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:51.131 15:00:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:51.131 15:00:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:51.131 15:00:14 -- common/autotest_common.sh@10 -- # set +x 00:13:51.131 15:00:14 -- nvmf/common.sh@469 -- # nvmfpid=82924 00:13:51.131 15:00:14 -- nvmf/common.sh@470 -- # waitforlisten 82924 00:13:51.131 15:00:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:51.131 15:00:14 -- common/autotest_common.sh@829 -- # '[' -z 82924 ']' 00:13:51.131 15:00:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.131 15:00:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.131 15:00:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.131 15:00:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.131 15:00:14 -- common/autotest_common.sh@10 -- # set +x 00:13:51.131 [2024-11-18 15:00:14.623011] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:51.131 [2024-11-18 15:00:14.623105] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.389 [2024-11-18 15:00:14.763906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:51.389 [2024-11-18 15:00:14.835521] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:51.389 [2024-11-18 15:00:14.835655] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:51.389 [2024-11-18 15:00:14.835668] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.389 [2024-11-18 15:00:14.835675] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:51.389 [2024-11-18 15:00:14.835809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.389 [2024-11-18 15:00:14.836715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:51.389 [2024-11-18 15:00:14.836886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:51.389 [2024-11-18 15:00:14.836891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.325 15:00:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.325 15:00:15 -- common/autotest_common.sh@862 -- # return 0 00:13:52.325 15:00:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:52.325 15:00:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:52.325 15:00:15 -- common/autotest_common.sh@10 -- # set +x 00:13:52.325 15:00:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.325 15:00:15 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:52.325 15:00:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.325 15:00:15 -- common/autotest_common.sh@10 -- # set +x 00:13:52.325 [2024-11-18 15:00:15.595231] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.325 15:00:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.325 15:00:15 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:52.325 15:00:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:52.325 15:00:15 -- common/autotest_common.sh@10 -- # set +x 00:13:52.325 15:00:15 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:52.325 15:00:15 -- target/host_management.sh@23 -- # cat 00:13:52.325 15:00:15 -- target/host_management.sh@30 -- # rpc_cmd 00:13:52.325 15:00:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.325 15:00:15 -- common/autotest_common.sh@10 -- # set +x 00:13:52.325 Malloc0 00:13:52.325 [2024-11-18 15:00:15.672204] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.325 15:00:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.325 15:00:15 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:52.325 15:00:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:52.325 15:00:15 -- common/autotest_common.sh@10 -- # set +x 00:13:52.325 15:00:15 -- target/host_management.sh@73 -- # perfpid=82996 00:13:52.325 15:00:15 -- target/host_management.sh@74 -- # waitforlisten 82996 /var/tmp/bdevperf.sock 00:13:52.325 15:00:15 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:52.325 15:00:15 -- common/autotest_common.sh@829 -- # '[' -z 82996 ']' 00:13:52.325 15:00:15 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:52.325 15:00:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:52.325 15:00:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:52.325 15:00:15 -- nvmf/common.sh@520 -- # config=() 00:13:52.325 Waiting for process to start up and listen on UNIX domain socket 
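Note: by this point the trace has built the veth test topology (nvmf_init_if at 10.0.0.1 on the host side, nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge), and nvmfappstart has launched the target inside that namespace with core mask 0x1E, which is why four reactors come up on cores 1 through 4. A condensed, hedged sketch of the equivalent manual launch (paths follow the CI layout above; the waitforlisten step is only approximated here with a framework_wait_init poll):

# Launch the NVMe-oF target inside the namespace, mirroring nvmfappstart -m 0x1E in the trace above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# Rough stand-in for waitforlisten: poll the default RPC socket until the app answers.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
    sleep 0.5
done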
/var/tmp/bdevperf.sock... 00:13:52.325 15:00:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:52.325 15:00:15 -- nvmf/common.sh@520 -- # local subsystem config 00:13:52.325 15:00:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:52.325 15:00:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:52.325 15:00:15 -- common/autotest_common.sh@10 -- # set +x 00:13:52.325 15:00:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:52.325 { 00:13:52.325 "params": { 00:13:52.325 "name": "Nvme$subsystem", 00:13:52.325 "trtype": "$TEST_TRANSPORT", 00:13:52.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:52.325 "adrfam": "ipv4", 00:13:52.325 "trsvcid": "$NVMF_PORT", 00:13:52.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:52.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:52.325 "hdgst": ${hdgst:-false}, 00:13:52.325 "ddgst": ${ddgst:-false} 00:13:52.325 }, 00:13:52.325 "method": "bdev_nvme_attach_controller" 00:13:52.325 } 00:13:52.325 EOF 00:13:52.325 )") 00:13:52.325 15:00:15 -- nvmf/common.sh@542 -- # cat 00:13:52.325 15:00:15 -- nvmf/common.sh@544 -- # jq . 00:13:52.325 15:00:15 -- nvmf/common.sh@545 -- # IFS=, 00:13:52.325 15:00:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:52.325 "params": { 00:13:52.325 "name": "Nvme0", 00:13:52.325 "trtype": "tcp", 00:13:52.325 "traddr": "10.0.0.2", 00:13:52.325 "adrfam": "ipv4", 00:13:52.325 "trsvcid": "4420", 00:13:52.325 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:52.325 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:52.325 "hdgst": false, 00:13:52.325 "ddgst": false 00:13:52.325 }, 00:13:52.325 "method": "bdev_nvme_attach_controller" 00:13:52.325 }' 00:13:52.325 [2024-11-18 15:00:15.777552] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:52.325 [2024-11-18 15:00:15.777628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82996 ] 00:13:52.584 [2024-11-18 15:00:15.921403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.584 [2024-11-18 15:00:15.980554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.584 Running I/O for 10 seconds... 
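Note: the first bdevperf pass above is configured entirely through a generated JSON document passed on /dev/fd/63. gen_nvmf_target_json emits a bdev_nvme_attach_controller entry pointing at the listener just created on 10.0.0.2:4420, and bdevperf drives a 64-deep, 64 KiB verify workload against the resulting Nvme0n1 bdev for 10 seconds. A hedged standalone equivalent is sketched below; the outer subsystems/config wrapper around the attach entry is an assumption, since the trace only prints the inner params object:

# Sketch of the '--json /dev/fd/63 -q 64 -o 65536 -w verify -t 10' invocation above.
# The "subsystems"/"config" wrapper is assumed; only the inner entry appears in the trace.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
    -q 64 -o 65536 -w verify -t 10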
00:13:53.522 15:00:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:53.522 15:00:16 -- common/autotest_common.sh@862 -- # return 0 00:13:53.522 15:00:16 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:53.522 15:00:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.522 15:00:16 -- common/autotest_common.sh@10 -- # set +x 00:13:53.522 15:00:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.522 15:00:16 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:53.522 15:00:16 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:53.522 15:00:16 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:53.522 15:00:16 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:53.522 15:00:16 -- target/host_management.sh@52 -- # local ret=1 00:13:53.522 15:00:16 -- target/host_management.sh@53 -- # local i 00:13:53.522 15:00:16 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:53.522 15:00:16 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:53.522 15:00:16 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:53.522 15:00:16 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:53.522 15:00:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.522 15:00:16 -- common/autotest_common.sh@10 -- # set +x 00:13:53.522 15:00:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.522 15:00:16 -- target/host_management.sh@55 -- # read_io_count=2563 00:13:53.522 15:00:16 -- target/host_management.sh@58 -- # '[' 2563 -ge 100 ']' 00:13:53.522 15:00:16 -- target/host_management.sh@59 -- # ret=0 00:13:53.522 15:00:16 -- target/host_management.sh@60 -- # break 00:13:53.522 15:00:16 -- target/host_management.sh@64 -- # return 0 00:13:53.522 15:00:16 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:53.522 15:00:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.522 15:00:16 -- common/autotest_common.sh@10 -- # set +x 00:13:53.522 [2024-11-18 15:00:16.871516] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.522 [2024-11-18 15:00:16.871570] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.522 [2024-11-18 15:00:16.871582] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.522 [2024-11-18 15:00:16.871590] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.522 [2024-11-18 15:00:16.871598] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.522 [2024-11-18 15:00:16.871615] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.522 [2024-11-18 15:00:16.871624] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.522 [2024-11-18 15:00:16.871632] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the 
state(5) to be set 00:13:53.522 [2024-11-18 15:00:16.871640] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.522 [2024-11-18 15:00:16.871648] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.522 [2024-11-18 15:00:16.871656] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.522 [2024-11-18 15:00:16.871664] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.522 [2024-11-18 15:00:16.871672] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.522 [2024-11-18 15:00:16.871680] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.522 [2024-11-18 15:00:16.871688] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.522 [2024-11-18 15:00:16.871712] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.522 [2024-11-18 15:00:16.871736] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.522 [2024-11-18 15:00:16.871745] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.522 [2024-11-18 15:00:16.871753] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.522 [2024-11-18 15:00:16.871761] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.522 [2024-11-18 15:00:16.871770] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.522 [2024-11-18 15:00:16.871778] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.522 [2024-11-18 15:00:16.871786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.523 [2024-11-18 15:00:16.871793] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.523 [2024-11-18 15:00:16.871801] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.523 [2024-11-18 15:00:16.871809] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.523 [2024-11-18 15:00:16.871817] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.523 [2024-11-18 15:00:16.871825] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.523 [2024-11-18 15:00:16.871832] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.523 [2024-11-18 15:00:16.871840] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.523 [2024-11-18 15:00:16.871848] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.523 [2024-11-18 15:00:16.871856] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093e70 is same with the state(5) to be set 00:13:53.523 [2024-11-18 15:00:16.874964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.523 [2024-11-18 15:00:16.875629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.523 [2024-11-18 15:00:16.875640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.875649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.875659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.875668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:13:53.524 [2024-11-18 15:00:16.875679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.875688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.875732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.875741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.875752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.875760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.875771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.875779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.875789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.875797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.875808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.875816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.875826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.875834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.875844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.875852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.875862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.875869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.875879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.875888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 
15:00:16.875898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.875906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.875916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.875924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.875934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.875942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.875952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.875970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.875980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.875988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.875998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.876006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.876016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.876024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.876039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.876047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.876058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.876067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.876077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.876085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.876095] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.876104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.876114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.876122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.876132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.876140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.876149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.876157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.876168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.876190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.876200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.876208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.876218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.876226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.876236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.876244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.876253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.876261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.876271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.876279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.876288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.876296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.524 [2024-11-18 15:00:16.876305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.524 [2024-11-18 15:00:16.876329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.525 [2024-11-18 15:00:16.876355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.525 [2024-11-18 15:00:16.876364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.525 [2024-11-18 15:00:16.876379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.525 [2024-11-18 15:00:16.876388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.525 [2024-11-18 15:00:16.876399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.525 [2024-11-18 15:00:16.876418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.525 [2024-11-18 15:00:16.876540] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x241bdc0 was disconnected and freed. reset controller. 
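Note: the wall of ABORTED - SQ DELETION (00/08) completions above is the expected fallout of the host-removal step issued a moment earlier (host_management.sh@84): with the verify workload still in flight, pulling nqn.2016-06.io.spdk:host0 out of the subsystem tears down the TCP queue pair, every outstanding read and write completes with the abort status, and the initiator schedules a controller reset (the bdev_nvme_disconnected_qpair_cb line). A hedged rpc.py equivalent of that rpc_cmd trigger:

# Sketch: remove the host from the subsystem while I/O is in flight. Assumes the stock rpc.py
# client and the default /var/tmp/spdk.sock that the target in this trace listens on.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0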
00:13:53.525 [2024-11-18 15:00:16.877740] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:53.525 15:00:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.525 15:00:16 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:53.525 15:00:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.525 task offset: 87552 on job bdev=Nvme0n1 fails 00:13:53.525 00:13:53.525 Latency(us) 00:13:53.525 [2024-11-18T15:00:17.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.525 [2024-11-18T15:00:17.115Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:53.525 [2024-11-18T15:00:17.115Z] Job: Nvme0n1 ended in about 0.72 seconds with error 00:13:53.525 Verification LBA range: start 0x0 length 0x400 00:13:53.525 Nvme0n1 : 0.72 3813.36 238.33 88.65 0.00 16147.79 1966.08 22401.40 00:13:53.525 [2024-11-18T15:00:17.115Z] =================================================================================================================== 00:13:53.525 [2024-11-18T15:00:17.115Z] Total : 3813.36 238.33 88.65 0.00 16147.79 1966.08 22401.40 00:13:53.525 15:00:16 -- common/autotest_common.sh@10 -- # set +x 00:13:53.525 [2024-11-18 15:00:16.879663] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:53.525 [2024-11-18 15:00:16.879701] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2377a70 (9): Bad file descriptor 00:13:53.525 [2024-11-18 15:00:16.885159] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:13:53.525 15:00:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.525 15:00:16 -- target/host_management.sh@87 -- # sleep 1 00:13:54.462 15:00:17 -- target/host_management.sh@91 -- # kill -9 82996 00:13:54.462 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (82996) - No such process 00:13:54.462 15:00:17 -- target/host_management.sh@91 -- # true 00:13:54.462 15:00:17 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:54.462 15:00:17 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:54.462 15:00:17 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:54.462 15:00:17 -- nvmf/common.sh@520 -- # config=() 00:13:54.462 15:00:17 -- nvmf/common.sh@520 -- # local subsystem config 00:13:54.462 15:00:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:54.462 15:00:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:54.462 { 00:13:54.462 "params": { 00:13:54.462 "name": "Nvme$subsystem", 00:13:54.462 "trtype": "$TEST_TRANSPORT", 00:13:54.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:54.462 "adrfam": "ipv4", 00:13:54.462 "trsvcid": "$NVMF_PORT", 00:13:54.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:54.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:54.462 "hdgst": ${hdgst:-false}, 00:13:54.462 "ddgst": ${ddgst:-false} 00:13:54.462 }, 00:13:54.462 "method": "bdev_nvme_attach_controller" 00:13:54.462 } 00:13:54.462 EOF 00:13:54.462 )") 00:13:54.462 15:00:17 -- nvmf/common.sh@542 -- # cat 00:13:54.462 15:00:17 -- nvmf/common.sh@544 -- # jq . 
00:13:54.462 15:00:17 -- nvmf/common.sh@545 -- # IFS=, 00:13:54.462 15:00:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:54.462 "params": { 00:13:54.462 "name": "Nvme0", 00:13:54.462 "trtype": "tcp", 00:13:54.462 "traddr": "10.0.0.2", 00:13:54.462 "adrfam": "ipv4", 00:13:54.462 "trsvcid": "4420", 00:13:54.462 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:54.462 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:54.462 "hdgst": false, 00:13:54.462 "ddgst": false 00:13:54.462 }, 00:13:54.462 "method": "bdev_nvme_attach_controller" 00:13:54.462 }' 00:13:54.462 [2024-11-18 15:00:17.955915] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:54.462 [2024-11-18 15:00:17.956028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83047 ] 00:13:54.721 [2024-11-18 15:00:18.097466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.721 [2024-11-18 15:00:18.159075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.978 Running I/O for 1 seconds... 00:13:55.912 00:13:55.912 Latency(us) 00:13:55.912 [2024-11-18T15:00:19.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.912 [2024-11-18T15:00:19.502Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:55.912 Verification LBA range: start 0x0 length 0x400 00:13:55.912 Nvme0n1 : 1.01 3999.96 250.00 0.00 0.00 15734.17 830.37 21686.46 00:13:55.912 [2024-11-18T15:00:19.502Z] =================================================================================================================== 00:13:55.912 [2024-11-18T15:00:19.502Z] Total : 3999.96 250.00 0.00 0.00 15734.17 830.37 21686.46 00:13:56.171 15:00:19 -- target/host_management.sh@101 -- # stoptarget 00:13:56.171 15:00:19 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:56.171 15:00:19 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:13:56.171 15:00:19 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:56.171 15:00:19 -- target/host_management.sh@40 -- # nvmftestfini 00:13:56.171 15:00:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:56.171 15:00:19 -- nvmf/common.sh@116 -- # sync 00:13:56.171 15:00:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:56.171 15:00:19 -- nvmf/common.sh@119 -- # set +e 00:13:56.171 15:00:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:56.171 15:00:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:56.171 rmmod nvme_tcp 00:13:56.171 rmmod nvme_fabrics 00:13:56.171 rmmod nvme_keyring 00:13:56.171 15:00:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:56.171 15:00:19 -- nvmf/common.sh@123 -- # set -e 00:13:56.171 15:00:19 -- nvmf/common.sh@124 -- # return 0 00:13:56.171 15:00:19 -- nvmf/common.sh@477 -- # '[' -n 82924 ']' 00:13:56.171 15:00:19 -- nvmf/common.sh@478 -- # killprocess 82924 00:13:56.171 15:00:19 -- common/autotest_common.sh@936 -- # '[' -z 82924 ']' 00:13:56.171 15:00:19 -- common/autotest_common.sh@940 -- # kill -0 82924 00:13:56.171 15:00:19 -- common/autotest_common.sh@941 -- # uname 00:13:56.171 15:00:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:56.171 15:00:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82924 00:13:56.171 
15:00:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:56.171 killing process with pid 82924 00:13:56.171 15:00:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:56.171 15:00:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82924' 00:13:56.171 15:00:19 -- common/autotest_common.sh@955 -- # kill 82924 00:13:56.171 15:00:19 -- common/autotest_common.sh@960 -- # wait 82924 00:13:56.430 [2024-11-18 15:00:19.862358] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:56.430 15:00:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:56.430 15:00:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:56.430 15:00:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:56.430 15:00:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:56.430 15:00:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:56.430 15:00:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.430 15:00:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.430 15:00:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.430 15:00:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:56.430 00:13:56.430 real 0m5.358s 00:13:56.430 user 0m22.492s 00:13:56.430 sys 0m1.339s 00:13:56.430 15:00:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:56.430 ************************************ 00:13:56.430 END TEST nvmf_host_management 00:13:56.430 15:00:19 -- common/autotest_common.sh@10 -- # set +x 00:13:56.430 ************************************ 00:13:56.430 15:00:19 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:13:56.430 00:13:56.430 real 0m5.955s 00:13:56.430 user 0m22.688s 00:13:56.430 sys 0m1.606s 00:13:56.430 15:00:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:56.430 ************************************ 00:13:56.430 15:00:19 -- common/autotest_common.sh@10 -- # set +x 00:13:56.430 END TEST nvmf_host_management 00:13:56.430 ************************************ 00:13:56.430 15:00:19 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:56.430 15:00:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:56.430 15:00:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:56.430 15:00:20 -- common/autotest_common.sh@10 -- # set +x 00:13:56.430 ************************************ 00:13:56.430 START TEST nvmf_lvol 00:13:56.430 ************************************ 00:13:56.430 15:00:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:56.690 * Looking for test storage... 
00:13:56.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:56.690 15:00:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:56.690 15:00:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:56.690 15:00:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:56.690 15:00:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:56.690 15:00:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:56.690 15:00:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:56.690 15:00:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:56.690 15:00:20 -- scripts/common.sh@335 -- # IFS=.-: 00:13:56.690 15:00:20 -- scripts/common.sh@335 -- # read -ra ver1 00:13:56.690 15:00:20 -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.690 15:00:20 -- scripts/common.sh@336 -- # read -ra ver2 00:13:56.690 15:00:20 -- scripts/common.sh@337 -- # local 'op=<' 00:13:56.690 15:00:20 -- scripts/common.sh@339 -- # ver1_l=2 00:13:56.690 15:00:20 -- scripts/common.sh@340 -- # ver2_l=1 00:13:56.690 15:00:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:56.690 15:00:20 -- scripts/common.sh@343 -- # case "$op" in 00:13:56.690 15:00:20 -- scripts/common.sh@344 -- # : 1 00:13:56.690 15:00:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:56.690 15:00:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:56.690 15:00:20 -- scripts/common.sh@364 -- # decimal 1 00:13:56.690 15:00:20 -- scripts/common.sh@352 -- # local d=1 00:13:56.690 15:00:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.690 15:00:20 -- scripts/common.sh@354 -- # echo 1 00:13:56.690 15:00:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:56.690 15:00:20 -- scripts/common.sh@365 -- # decimal 2 00:13:56.690 15:00:20 -- scripts/common.sh@352 -- # local d=2 00:13:56.690 15:00:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.690 15:00:20 -- scripts/common.sh@354 -- # echo 2 00:13:56.690 15:00:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:56.690 15:00:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:56.690 15:00:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:56.690 15:00:20 -- scripts/common.sh@367 -- # return 0 00:13:56.690 15:00:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.690 15:00:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:56.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.690 --rc genhtml_branch_coverage=1 00:13:56.690 --rc genhtml_function_coverage=1 00:13:56.690 --rc genhtml_legend=1 00:13:56.690 --rc geninfo_all_blocks=1 00:13:56.690 --rc geninfo_unexecuted_blocks=1 00:13:56.690 00:13:56.690 ' 00:13:56.690 15:00:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:56.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.690 --rc genhtml_branch_coverage=1 00:13:56.690 --rc genhtml_function_coverage=1 00:13:56.690 --rc genhtml_legend=1 00:13:56.690 --rc geninfo_all_blocks=1 00:13:56.690 --rc geninfo_unexecuted_blocks=1 00:13:56.690 00:13:56.690 ' 00:13:56.690 15:00:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:56.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.691 --rc genhtml_branch_coverage=1 00:13:56.691 --rc genhtml_function_coverage=1 00:13:56.691 --rc genhtml_legend=1 00:13:56.691 --rc geninfo_all_blocks=1 00:13:56.691 --rc geninfo_unexecuted_blocks=1 00:13:56.691 00:13:56.691 ' 00:13:56.691 
15:00:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:56.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.691 --rc genhtml_branch_coverage=1 00:13:56.691 --rc genhtml_function_coverage=1 00:13:56.691 --rc genhtml_legend=1 00:13:56.691 --rc geninfo_all_blocks=1 00:13:56.691 --rc geninfo_unexecuted_blocks=1 00:13:56.691 00:13:56.691 ' 00:13:56.691 15:00:20 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:56.691 15:00:20 -- nvmf/common.sh@7 -- # uname -s 00:13:56.691 15:00:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.691 15:00:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.691 15:00:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.691 15:00:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.691 15:00:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.691 15:00:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.691 15:00:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.691 15:00:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.691 15:00:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.691 15:00:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.691 15:00:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:13:56.691 15:00:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:13:56.691 15:00:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.691 15:00:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.691 15:00:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:56.691 15:00:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:56.691 15:00:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.691 15:00:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.691 15:00:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.691 15:00:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.691 15:00:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.691 15:00:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.691 15:00:20 -- paths/export.sh@5 -- # export PATH 00:13:56.691 15:00:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.691 15:00:20 -- nvmf/common.sh@46 -- # : 0 00:13:56.691 15:00:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:56.691 15:00:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:56.691 15:00:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:56.691 15:00:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.691 15:00:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.691 15:00:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:56.691 15:00:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:56.691 15:00:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:56.691 15:00:20 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:56.691 15:00:20 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:56.691 15:00:20 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:56.691 15:00:20 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:56.691 15:00:20 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:56.691 15:00:20 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:56.691 15:00:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:56.691 15:00:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.691 15:00:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:56.691 15:00:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:56.691 15:00:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:56.691 15:00:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.691 15:00:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.691 15:00:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.691 15:00:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:56.691 15:00:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:56.691 15:00:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:56.691 15:00:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:56.691 15:00:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:56.691 15:00:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:56.691 15:00:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.691 15:00:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.691 15:00:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:56.691 15:00:20 -- 
nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:56.691 15:00:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:56.691 15:00:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:56.691 15:00:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:56.691 15:00:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.691 15:00:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:56.691 15:00:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:56.691 15:00:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:56.691 15:00:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:56.691 15:00:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:56.691 15:00:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:56.691 Cannot find device "nvmf_tgt_br" 00:13:56.691 15:00:20 -- nvmf/common.sh@154 -- # true 00:13:56.691 15:00:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:56.691 Cannot find device "nvmf_tgt_br2" 00:13:56.691 15:00:20 -- nvmf/common.sh@155 -- # true 00:13:56.691 15:00:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:56.691 15:00:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:56.950 Cannot find device "nvmf_tgt_br" 00:13:56.950 15:00:20 -- nvmf/common.sh@157 -- # true 00:13:56.950 15:00:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:56.950 Cannot find device "nvmf_tgt_br2" 00:13:56.950 15:00:20 -- nvmf/common.sh@158 -- # true 00:13:56.950 15:00:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:56.950 15:00:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:56.950 15:00:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:56.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:56.950 15:00:20 -- nvmf/common.sh@161 -- # true 00:13:56.950 15:00:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:56.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:56.950 15:00:20 -- nvmf/common.sh@162 -- # true 00:13:56.950 15:00:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:56.950 15:00:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:56.950 15:00:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:56.950 15:00:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:56.950 15:00:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:56.950 15:00:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:56.950 15:00:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:56.950 15:00:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:56.950 15:00:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:56.950 15:00:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:56.950 15:00:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:56.950 15:00:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:56.950 15:00:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:56.950 15:00:20 -- 
nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:56.950 15:00:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:56.950 15:00:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:56.950 15:00:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:56.951 15:00:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:56.951 15:00:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:56.951 15:00:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:56.951 15:00:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:57.210 15:00:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:57.210 15:00:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:57.210 15:00:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:57.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:57.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:13:57.210 00:13:57.210 --- 10.0.0.2 ping statistics --- 00:13:57.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.210 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:13:57.210 15:00:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:57.210 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:57.210 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:13:57.210 00:13:57.210 --- 10.0.0.3 ping statistics --- 00:13:57.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.210 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:13:57.210 15:00:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:57.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:57.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:13:57.210 00:13:57.210 --- 10.0.0.1 ping statistics --- 00:13:57.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.210 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:13:57.210 15:00:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:57.210 15:00:20 -- nvmf/common.sh@421 -- # return 0 00:13:57.210 15:00:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:57.210 15:00:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:57.210 15:00:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:57.210 15:00:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:57.210 15:00:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:57.210 15:00:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:57.210 15:00:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:57.210 15:00:20 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:57.210 15:00:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:57.210 15:00:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:57.210 15:00:20 -- common/autotest_common.sh@10 -- # set +x 00:13:57.210 15:00:20 -- nvmf/common.sh@469 -- # nvmfpid=83279 00:13:57.210 15:00:20 -- nvmf/common.sh@470 -- # waitforlisten 83279 00:13:57.210 15:00:20 -- common/autotest_common.sh@829 -- # '[' -z 83279 ']' 00:13:57.210 15:00:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:57.210 15:00:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.210 15:00:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:57.210 15:00:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.210 15:00:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:57.210 15:00:20 -- common/autotest_common.sh@10 -- # set +x 00:13:57.210 [2024-11-18 15:00:20.649777] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:57.210 [2024-11-18 15:00:20.649863] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.210 [2024-11-18 15:00:20.791038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:57.469 [2024-11-18 15:00:20.875661] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:57.469 [2024-11-18 15:00:20.875863] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.469 [2024-11-18 15:00:20.875881] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.469 [2024-11-18 15:00:20.875892] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
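The setup traced above builds the virtual network that every later step depends on: a network namespace (nvmf_tgt_ns_spdk) holds the target ends of the veth pairs carrying 10.0.0.2 and 10.0.0.3, the initiator end nvmf_init_if stays in the root namespace with 10.0.0.1, and the bridge-side peers are enslaved to nvmf_br so all three addresses can reach each other; the single-packet pings confirm the topology before nvmf_tgt is launched inside the namespace. A condensed sketch of the same topology, reduced to one target interface, with names and addresses taken from the trace and the cleanup of stale devices omitted:

    # Sketch: the nvmf_veth_init topology, one initiator and one target interface.
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge                              # bridge ties the root-namespace peers together
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    ping -c 1 10.0.0.2                                           # initiator to target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target back to initiator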
00:13:57.469 [2024-11-18 15:00:20.876187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.469 [2024-11-18 15:00:20.876639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.469 [2024-11-18 15:00:20.876652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.037 15:00:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:58.037 15:00:21 -- common/autotest_common.sh@862 -- # return 0 00:13:58.037 15:00:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:58.037 15:00:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:58.037 15:00:21 -- common/autotest_common.sh@10 -- # set +x 00:13:58.295 15:00:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.295 15:00:21 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:58.553 [2024-11-18 15:00:21.912189] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.553 15:00:21 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:58.813 15:00:22 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:58.813 15:00:22 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:59.072 15:00:22 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:59.072 15:00:22 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:59.331 15:00:22 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:59.590 15:00:23 -- target/nvmf_lvol.sh@29 -- # lvs=df2347d3-f7a3-4fc6-a9ff-7b7934ca96e2 00:13:59.590 15:00:23 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u df2347d3-f7a3-4fc6-a9ff-7b7934ca96e2 lvol 20 00:13:59.857 15:00:23 -- target/nvmf_lvol.sh@32 -- # lvol=614c27f5-35e0-474f-b32b-eea3d1bfeb7f 00:13:59.857 15:00:23 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:00.172 15:00:23 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 614c27f5-35e0-474f-b32b-eea3d1bfeb7f 00:14:00.437 15:00:23 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:00.695 [2024-11-18 15:00:24.064604] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.695 15:00:24 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:00.954 15:00:24 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:00.954 15:00:24 -- target/nvmf_lvol.sh@42 -- # perf_pid=83432 00:14:00.954 15:00:24 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:01.890 15:00:25 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 614c27f5-35e0-474f-b32b-eea3d1bfeb7f MY_SNAPSHOT 00:14:02.148 15:00:25 -- target/nvmf_lvol.sh@47 -- # snapshot=209b7d61-4262-4f3e-9939-362856867825 00:14:02.148 15:00:25 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 614c27f5-35e0-474f-b32b-eea3d1bfeb7f 30 00:14:02.715 15:00:26 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 209b7d61-4262-4f3e-9939-362856867825 MY_CLONE 00:14:02.974 15:00:26 -- target/nvmf_lvol.sh@49 -- # clone=7e7d76d1-80d2-4748-a129-c542d74bba93 00:14:02.974 15:00:26 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 7e7d76d1-80d2-4748-a129-c542d74bba93 00:14:03.542 15:00:27 -- target/nvmf_lvol.sh@53 -- # wait 83432 00:14:11.658 Initializing NVMe Controllers 00:14:11.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:11.659 Controller IO queue size 128, less than required. 00:14:11.659 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:11.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:11.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:11.659 Initialization complete. Launching workers. 00:14:11.659 ======================================================== 00:14:11.659 Latency(us) 00:14:11.659 Device Information : IOPS MiB/s Average min max 00:14:11.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7914.90 30.92 16173.76 2781.72 78481.76 00:14:11.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7636.60 29.83 16766.56 2967.75 93819.53 00:14:11.659 ======================================================== 00:14:11.659 Total : 15551.50 60.75 16464.86 2781.72 93819.53 00:14:11.659 00:14:11.659 15:00:34 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:11.659 15:00:34 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 614c27f5-35e0-474f-b32b-eea3d1bfeb7f 00:14:11.659 15:00:35 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u df2347d3-f7a3-4fc6-a9ff-7b7934ca96e2 00:14:11.918 15:00:35 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:11.918 15:00:35 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:11.918 15:00:35 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:11.918 15:00:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:11.918 15:00:35 -- nvmf/common.sh@116 -- # sync 00:14:11.918 15:00:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:11.918 15:00:35 -- nvmf/common.sh@119 -- # set +e 00:14:11.918 15:00:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:11.918 15:00:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:11.918 rmmod nvme_tcp 00:14:11.918 rmmod nvme_fabrics 00:14:11.918 rmmod nvme_keyring 00:14:11.918 15:00:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:11.918 15:00:35 -- nvmf/common.sh@123 -- # set -e 00:14:11.918 15:00:35 -- nvmf/common.sh@124 -- # return 0 00:14:11.918 15:00:35 -- nvmf/common.sh@477 -- # '[' -n 83279 ']' 00:14:11.918 15:00:35 -- nvmf/common.sh@478 -- # killprocess 83279 00:14:11.918 15:00:35 -- common/autotest_common.sh@936 -- # '[' -z 83279 ']' 00:14:11.918 15:00:35 -- common/autotest_common.sh@940 -- # kill -0 83279 00:14:11.918 15:00:35 -- common/autotest_common.sh@941 -- # uname 00:14:11.918 15:00:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:11.918 15:00:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
83279 00:14:11.918 killing process with pid 83279 00:14:11.918 15:00:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:11.918 15:00:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:11.918 15:00:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83279' 00:14:11.918 15:00:35 -- common/autotest_common.sh@955 -- # kill 83279 00:14:11.918 15:00:35 -- common/autotest_common.sh@960 -- # wait 83279 00:14:12.486 15:00:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:12.486 15:00:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:12.486 15:00:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:12.486 15:00:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:12.486 15:00:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:12.486 15:00:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.486 15:00:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.486 15:00:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.486 15:00:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:12.486 00:14:12.486 real 0m15.805s 00:14:12.486 user 1m6.473s 00:14:12.486 sys 0m3.166s 00:14:12.486 15:00:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:12.486 15:00:35 -- common/autotest_common.sh@10 -- # set +x 00:14:12.486 ************************************ 00:14:12.486 END TEST nvmf_lvol 00:14:12.486 ************************************ 00:14:12.486 15:00:35 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:12.486 15:00:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:12.486 15:00:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:12.486 15:00:35 -- common/autotest_common.sh@10 -- # set +x 00:14:12.486 ************************************ 00:14:12.486 START TEST nvmf_lvs_grow 00:14:12.486 ************************************ 00:14:12.486 15:00:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:12.486 * Looking for test storage... 
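Before following the nvmf_lvs_grow run any further, the RPC sequence that the nvmf_lvol test above drove is worth pulling out of the trace: two malloc bdevs are striped into a RAID-0, a logical volume store and a 20 MiB volume are carved out of it, the volume is exported over NVMe/TCP, and a snapshot, resize, clone and inflate are performed while spdk_nvme_perf writes to the namespace. The condensed sketch below keeps the RPC names and arguments from the trace; the shell variables holding the returned UUIDs are illustrative.

    # Sketch: the RPC flow exercised by nvmf_lvol, with placeholder variables for UUIDs.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                                   # Malloc0
    $rpc bdev_malloc_create 64 512                                   # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # while spdk_nvme_perf runs random writes against the namespace:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"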
00:14:12.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:12.486 15:00:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:12.486 15:00:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:12.486 15:00:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:12.486 15:00:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:12.486 15:00:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:12.486 15:00:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:12.486 15:00:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:12.486 15:00:36 -- scripts/common.sh@335 -- # IFS=.-: 00:14:12.486 15:00:36 -- scripts/common.sh@335 -- # read -ra ver1 00:14:12.486 15:00:36 -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.486 15:00:36 -- scripts/common.sh@336 -- # read -ra ver2 00:14:12.486 15:00:36 -- scripts/common.sh@337 -- # local 'op=<' 00:14:12.486 15:00:36 -- scripts/common.sh@339 -- # ver1_l=2 00:14:12.486 15:00:36 -- scripts/common.sh@340 -- # ver2_l=1 00:14:12.486 15:00:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:12.486 15:00:36 -- scripts/common.sh@343 -- # case "$op" in 00:14:12.486 15:00:36 -- scripts/common.sh@344 -- # : 1 00:14:12.486 15:00:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:12.487 15:00:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:12.487 15:00:36 -- scripts/common.sh@364 -- # decimal 1 00:14:12.487 15:00:36 -- scripts/common.sh@352 -- # local d=1 00:14:12.487 15:00:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.487 15:00:36 -- scripts/common.sh@354 -- # echo 1 00:14:12.487 15:00:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:12.487 15:00:36 -- scripts/common.sh@365 -- # decimal 2 00:14:12.487 15:00:36 -- scripts/common.sh@352 -- # local d=2 00:14:12.487 15:00:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.487 15:00:36 -- scripts/common.sh@354 -- # echo 2 00:14:12.487 15:00:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:12.487 15:00:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:12.487 15:00:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:12.487 15:00:36 -- scripts/common.sh@367 -- # return 0 00:14:12.487 15:00:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.487 15:00:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:12.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.487 --rc genhtml_branch_coverage=1 00:14:12.487 --rc genhtml_function_coverage=1 00:14:12.487 --rc genhtml_legend=1 00:14:12.487 --rc geninfo_all_blocks=1 00:14:12.487 --rc geninfo_unexecuted_blocks=1 00:14:12.487 00:14:12.487 ' 00:14:12.487 15:00:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:12.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.487 --rc genhtml_branch_coverage=1 00:14:12.487 --rc genhtml_function_coverage=1 00:14:12.487 --rc genhtml_legend=1 00:14:12.487 --rc geninfo_all_blocks=1 00:14:12.487 --rc geninfo_unexecuted_blocks=1 00:14:12.487 00:14:12.487 ' 00:14:12.487 15:00:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:12.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.487 --rc genhtml_branch_coverage=1 00:14:12.487 --rc genhtml_function_coverage=1 00:14:12.487 --rc genhtml_legend=1 00:14:12.487 --rc geninfo_all_blocks=1 00:14:12.487 --rc geninfo_unexecuted_blocks=1 00:14:12.487 00:14:12.487 ' 00:14:12.487 
15:00:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:12.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.487 --rc genhtml_branch_coverage=1 00:14:12.487 --rc genhtml_function_coverage=1 00:14:12.487 --rc genhtml_legend=1 00:14:12.487 --rc geninfo_all_blocks=1 00:14:12.487 --rc geninfo_unexecuted_blocks=1 00:14:12.487 00:14:12.487 ' 00:14:12.487 15:00:36 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:12.487 15:00:36 -- nvmf/common.sh@7 -- # uname -s 00:14:12.487 15:00:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.487 15:00:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.487 15:00:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.487 15:00:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.487 15:00:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.487 15:00:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.487 15:00:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.487 15:00:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.487 15:00:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.487 15:00:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.487 15:00:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:14:12.487 15:00:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:14:12.487 15:00:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.487 15:00:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.487 15:00:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:12.487 15:00:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:12.487 15:00:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.487 15:00:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.487 15:00:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.487 15:00:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.487 15:00:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.487 15:00:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.487 15:00:36 -- paths/export.sh@5 -- # export PATH 00:14:12.487 15:00:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.746 15:00:36 -- nvmf/common.sh@46 -- # : 0 00:14:12.746 15:00:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:12.746 15:00:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:12.746 15:00:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:12.746 15:00:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.746 15:00:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.746 15:00:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:12.746 15:00:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:12.746 15:00:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:12.746 15:00:36 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:12.746 15:00:36 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:12.746 15:00:36 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:12.746 15:00:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:12.746 15:00:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.746 15:00:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:12.746 15:00:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:12.746 15:00:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:12.746 15:00:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.746 15:00:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.746 15:00:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.746 15:00:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:12.746 15:00:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:12.746 15:00:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:12.747 15:00:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:12.747 15:00:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:12.747 15:00:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:12.747 15:00:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.747 15:00:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:12.747 15:00:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:12.747 15:00:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:12.747 15:00:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:12.747 15:00:36 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:12.747 15:00:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:12.747 15:00:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.747 15:00:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:12.747 15:00:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:12.747 15:00:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:12.747 15:00:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:12.747 15:00:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:12.747 15:00:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:12.747 Cannot find device "nvmf_tgt_br" 00:14:12.747 15:00:36 -- nvmf/common.sh@154 -- # true 00:14:12.747 15:00:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:12.747 Cannot find device "nvmf_tgt_br2" 00:14:12.747 15:00:36 -- nvmf/common.sh@155 -- # true 00:14:12.747 15:00:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:12.747 15:00:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:12.747 Cannot find device "nvmf_tgt_br" 00:14:12.747 15:00:36 -- nvmf/common.sh@157 -- # true 00:14:12.747 15:00:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:12.747 Cannot find device "nvmf_tgt_br2" 00:14:12.747 15:00:36 -- nvmf/common.sh@158 -- # true 00:14:12.747 15:00:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:12.747 15:00:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:12.747 15:00:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:12.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:12.747 15:00:36 -- nvmf/common.sh@161 -- # true 00:14:12.747 15:00:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:12.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:12.747 15:00:36 -- nvmf/common.sh@162 -- # true 00:14:12.747 15:00:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:12.747 15:00:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:12.747 15:00:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:12.747 15:00:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:12.747 15:00:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:12.747 15:00:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:12.747 15:00:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:12.747 15:00:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:12.747 15:00:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:12.747 15:00:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:12.747 15:00:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:12.747 15:00:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:12.747 15:00:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:12.747 15:00:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:12.747 15:00:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:14:12.747 15:00:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:13.006 15:00:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:13.006 15:00:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:13.006 15:00:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:13.006 15:00:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:13.006 15:00:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:13.006 15:00:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:13.006 15:00:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:13.006 15:00:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:13.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:13.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:14:13.006 00:14:13.006 --- 10.0.0.2 ping statistics --- 00:14:13.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.006 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:13.006 15:00:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:13.006 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:13.006 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:14:13.006 00:14:13.006 --- 10.0.0.3 ping statistics --- 00:14:13.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.006 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:13.006 15:00:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:13.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:13.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:13.006 00:14:13.006 --- 10.0.0.1 ping statistics --- 00:14:13.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.006 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:13.006 15:00:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.006 15:00:36 -- nvmf/common.sh@421 -- # return 0 00:14:13.006 15:00:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:13.006 15:00:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.006 15:00:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:13.006 15:00:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:13.006 15:00:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.006 15:00:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:13.006 15:00:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:13.006 15:00:36 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:13.006 15:00:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:13.006 15:00:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:13.006 15:00:36 -- common/autotest_common.sh@10 -- # set +x 00:14:13.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
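Two host-side prerequisites from the setup above are easy to lose in the trace noise: the firewall must admit NVMe/TCP traffic arriving on the initiator-side veth and allow frames to be forwarded within the bridge, and nvmftestinit loads the kernel nvme-tcp module (this run's initiator is SPDK bdevperf, but the NVME_CONNECT helper defined earlier points at tests that attach with the kernel driver). Condensed from the trace:

    # Sketch: host prerequisites pulled out of the nvmf_veth_init trace above.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator veth
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow bridge-internal forwarding
    modprobe nvme-tcp                                                   # kernel NVMe/TCP initiator module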
00:14:13.006 15:00:36 -- nvmf/common.sh@469 -- # nvmfpid=83798 00:14:13.006 15:00:36 -- nvmf/common.sh@470 -- # waitforlisten 83798 00:14:13.006 15:00:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:13.006 15:00:36 -- common/autotest_common.sh@829 -- # '[' -z 83798 ']' 00:14:13.006 15:00:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.006 15:00:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.006 15:00:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.006 15:00:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.006 15:00:36 -- common/autotest_common.sh@10 -- # set +x 00:14:13.006 [2024-11-18 15:00:36.479275] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:13.006 [2024-11-18 15:00:36.479400] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.265 [2024-11-18 15:00:36.616870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.265 [2024-11-18 15:00:36.687609] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:13.265 [2024-11-18 15:00:36.687796] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.265 [2024-11-18 15:00:36.687814] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.265 [2024-11-18 15:00:36.687826] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
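The target for the lvs_grow tests has just been launched inside the namespace with a single-core mask (-m 0x1, one reactor, versus -m 0x7 and three reactors for the lvol test earlier), and the harness blocks until the application answers on its UNIX-domain RPC socket before issuing any further RPCs. A reduced sketch of that launch-and-wait step; the polling loop is a simplified stand-in for the waitforlisten helper, and rpc_get_methods is used only as a cheap liveness probe:

    # Sketch: start nvmf_tgt inside the namespace and wait for its RPC socket.
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until $rpc -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1     # give up if the target died during startup
        sleep 0.5
    done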
00:14:13.265 [2024-11-18 15:00:36.687866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.832 15:00:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.832 15:00:37 -- common/autotest_common.sh@862 -- # return 0 00:14:13.832 15:00:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:13.832 15:00:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:13.832 15:00:37 -- common/autotest_common.sh@10 -- # set +x 00:14:14.091 15:00:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.091 15:00:37 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:14.350 [2024-11-18 15:00:37.702072] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.350 15:00:37 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:14.350 15:00:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:14.350 15:00:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:14.350 15:00:37 -- common/autotest_common.sh@10 -- # set +x 00:14:14.350 ************************************ 00:14:14.350 START TEST lvs_grow_clean 00:14:14.350 ************************************ 00:14:14.350 15:00:37 -- common/autotest_common.sh@1114 -- # lvs_grow 00:14:14.350 15:00:37 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:14.350 15:00:37 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:14.350 15:00:37 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:14.350 15:00:37 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:14.350 15:00:37 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:14.350 15:00:37 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:14.350 15:00:37 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:14.350 15:00:37 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:14.350 15:00:37 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:14.608 15:00:38 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:14.608 15:00:38 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:14.866 15:00:38 -- target/nvmf_lvs_grow.sh@28 -- # lvs=870217f2-41cb-407d-8427-286068f8a167 00:14:14.866 15:00:38 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:14.866 15:00:38 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 870217f2-41cb-407d-8427-286068f8a167 00:14:15.125 15:00:38 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:15.125 15:00:38 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:15.125 15:00:38 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 870217f2-41cb-407d-8427-286068f8a167 lvol 150 00:14:15.383 15:00:38 -- target/nvmf_lvs_grow.sh@33 -- # lvol=53c6a543-9800-4466-97b0-005ea4c19cac 00:14:15.383 15:00:38 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:15.383 15:00:38 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:15.642 [2024-11-18 15:00:39.115008] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:15.642 [2024-11-18 15:00:39.115081] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:15.642 true 00:14:15.642 15:00:39 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 870217f2-41cb-407d-8427-286068f8a167 00:14:15.642 15:00:39 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:15.900 15:00:39 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:15.900 15:00:39 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:16.158 15:00:39 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 53c6a543-9800-4466-97b0-005ea4c19cac 00:14:16.416 15:00:39 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:16.674 [2024-11-18 15:00:40.019548] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.674 15:00:40 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:16.674 15:00:40 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:16.674 15:00:40 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83961 00:14:16.674 15:00:40 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:16.674 15:00:40 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83961 /var/tmp/bdevperf.sock 00:14:16.674 15:00:40 -- common/autotest_common.sh@829 -- # '[' -z 83961 ']' 00:14:16.674 15:00:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:16.674 15:00:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:16.674 15:00:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:16.674 15:00:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.674 15:00:40 -- common/autotest_common.sh@10 -- # set +x 00:14:16.932 [2024-11-18 15:00:40.287395] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:14:16.932 [2024-11-18 15:00:40.287482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83961 ] 00:14:16.932 [2024-11-18 15:00:40.423568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.932 [2024-11-18 15:00:40.485729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.867 15:00:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:17.867 15:00:41 -- common/autotest_common.sh@862 -- # return 0 00:14:17.867 15:00:41 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:18.126 Nvme0n1 00:14:18.126 15:00:41 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:18.385 [ 00:14:18.385 { 00:14:18.385 "aliases": [ 00:14:18.385 "53c6a543-9800-4466-97b0-005ea4c19cac" 00:14:18.385 ], 00:14:18.385 "assigned_rate_limits": { 00:14:18.385 "r_mbytes_per_sec": 0, 00:14:18.385 "rw_ios_per_sec": 0, 00:14:18.385 "rw_mbytes_per_sec": 0, 00:14:18.385 "w_mbytes_per_sec": 0 00:14:18.385 }, 00:14:18.385 "block_size": 4096, 00:14:18.385 "claimed": false, 00:14:18.385 "driver_specific": { 00:14:18.385 "mp_policy": "active_passive", 00:14:18.385 "nvme": [ 00:14:18.385 { 00:14:18.385 "ctrlr_data": { 00:14:18.385 "ana_reporting": false, 00:14:18.385 "cntlid": 1, 00:14:18.385 "firmware_revision": "24.01.1", 00:14:18.385 "model_number": "SPDK bdev Controller", 00:14:18.385 "multi_ctrlr": true, 00:14:18.385 "oacs": { 00:14:18.385 "firmware": 0, 00:14:18.385 "format": 0, 00:14:18.385 "ns_manage": 0, 00:14:18.385 "security": 0 00:14:18.385 }, 00:14:18.385 "serial_number": "SPDK0", 00:14:18.385 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:18.385 "vendor_id": "0x8086" 00:14:18.385 }, 00:14:18.385 "ns_data": { 00:14:18.385 "can_share": true, 00:14:18.385 "id": 1 00:14:18.385 }, 00:14:18.385 "trid": { 00:14:18.385 "adrfam": "IPv4", 00:14:18.385 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:18.385 "traddr": "10.0.0.2", 00:14:18.385 "trsvcid": "4420", 00:14:18.385 "trtype": "TCP" 00:14:18.385 }, 00:14:18.385 "vs": { 00:14:18.385 "nvme_version": "1.3" 00:14:18.385 } 00:14:18.385 } 00:14:18.385 ] 00:14:18.385 }, 00:14:18.385 "name": "Nvme0n1", 00:14:18.385 "num_blocks": 38912, 00:14:18.385 "product_name": "NVMe disk", 00:14:18.385 "supported_io_types": { 00:14:18.385 "abort": true, 00:14:18.385 "compare": true, 00:14:18.385 "compare_and_write": true, 00:14:18.385 "flush": true, 00:14:18.385 "nvme_admin": true, 00:14:18.385 "nvme_io": true, 00:14:18.385 "read": true, 00:14:18.385 "reset": true, 00:14:18.385 "unmap": true, 00:14:18.385 "write": true, 00:14:18.385 "write_zeroes": true 00:14:18.385 }, 00:14:18.385 "uuid": "53c6a543-9800-4466-97b0-005ea4c19cac", 00:14:18.385 "zoned": false 00:14:18.385 } 00:14:18.385 ] 00:14:18.385 15:00:41 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=84010 00:14:18.385 15:00:41 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:18.385 15:00:41 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:18.385 Running I/O for 10 seconds... 
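At this point the logical volume created at 150 MiB is attached over NVMe/TCP as Nvme0n1 and bdevperf starts a 10 second, 4 KiB random-write workload at queue depth 128 against it. The point of lvs_grow_clean is that the store grows underneath live I/O: the backing aio file was created at 200 MiB, which with the 4 MiB cluster size yields the 49 data clusters reported earlier (consistent with roughly one cluster going to lvstore metadata); the file has already been truncated to 400 MiB and the AIO bdev rescanned, and bdev_lvol_grow_lvstore is issued while the workload runs, after which the same query reports 99 clusters. A condensed sketch of the grow path, with paths and RPC names from the trace and $lvs standing in for the lvstore UUID:

    # Sketch: grow the lvstore under live I/O, as lvs_grow_clean does below.
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    truncate -s 400M "$aio"                      # enlarge the backing file (created at 200M)
    $rpc bdev_aio_rescan aio_bdev                # the AIO bdev picks up the new block count
    $rpc bdev_lvol_grow_lvstore -u "$lvs"        # the lvstore claims the new clusters
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before, 99 after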
00:14:19.761 Latency(us) 00:14:19.761 [2024-11-18T15:00:43.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.761 [2024-11-18T15:00:43.351Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.761 Nvme0n1 : 1.00 7309.00 28.55 0.00 0.00 0.00 0.00 0.00 00:14:19.761 [2024-11-18T15:00:43.351Z] =================================================================================================================== 00:14:19.761 [2024-11-18T15:00:43.351Z] Total : 7309.00 28.55 0.00 0.00 0.00 0.00 0.00 00:14:19.761 00:14:20.328 15:00:43 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 870217f2-41cb-407d-8427-286068f8a167 00:14:20.586 [2024-11-18T15:00:44.176Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:20.586 Nvme0n1 : 2.00 7317.00 28.58 0.00 0.00 0.00 0.00 0.00 00:14:20.586 [2024-11-18T15:00:44.176Z] =================================================================================================================== 00:14:20.586 [2024-11-18T15:00:44.176Z] Total : 7317.00 28.58 0.00 0.00 0.00 0.00 0.00 00:14:20.586 00:14:20.586 true 00:14:20.845 15:00:44 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 870217f2-41cb-407d-8427-286068f8a167 00:14:20.845 15:00:44 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:21.103 15:00:44 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:21.103 15:00:44 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:21.103 15:00:44 -- target/nvmf_lvs_grow.sh@65 -- # wait 84010 00:14:21.362 [2024-11-18T15:00:44.952Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.362 Nvme0n1 : 3.00 7286.00 28.46 0.00 0.00 0.00 0.00 0.00 00:14:21.362 [2024-11-18T15:00:44.952Z] =================================================================================================================== 00:14:21.362 [2024-11-18T15:00:44.952Z] Total : 7286.00 28.46 0.00 0.00 0.00 0.00 0.00 00:14:21.362 00:14:22.738 [2024-11-18T15:00:46.328Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.738 Nvme0n1 : 4.00 7273.50 28.41 0.00 0.00 0.00 0.00 0.00 00:14:22.738 [2024-11-18T15:00:46.328Z] =================================================================================================================== 00:14:22.738 [2024-11-18T15:00:46.328Z] Total : 7273.50 28.41 0.00 0.00 0.00 0.00 0.00 00:14:22.738 00:14:23.673 [2024-11-18T15:00:47.263Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:23.673 Nvme0n1 : 5.00 7237.60 28.27 0.00 0.00 0.00 0.00 0.00 00:14:23.673 [2024-11-18T15:00:47.263Z] =================================================================================================================== 00:14:23.673 [2024-11-18T15:00:47.263Z] Total : 7237.60 28.27 0.00 0.00 0.00 0.00 0.00 00:14:23.673 00:14:24.609 [2024-11-18T15:00:48.199Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:24.609 Nvme0n1 : 6.00 7213.67 28.18 0.00 0.00 0.00 0.00 0.00 00:14:24.609 [2024-11-18T15:00:48.199Z] =================================================================================================================== 00:14:24.609 [2024-11-18T15:00:48.199Z] Total : 7213.67 28.18 0.00 0.00 0.00 0.00 0.00 00:14:24.609 00:14:25.545 [2024-11-18T15:00:49.135Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:14:25.545 Nvme0n1 : 7.00 7212.00 28.17 0.00 0.00 0.00 0.00 0.00 00:14:25.545 [2024-11-18T15:00:49.135Z] =================================================================================================================== 00:14:25.545 [2024-11-18T15:00:49.135Z] Total : 7212.00 28.17 0.00 0.00 0.00 0.00 0.00 00:14:25.545 00:14:26.480 [2024-11-18T15:00:50.070Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.480 Nvme0n1 : 8.00 7129.75 27.85 0.00 0.00 0.00 0.00 0.00 00:14:26.480 [2024-11-18T15:00:50.070Z] =================================================================================================================== 00:14:26.480 [2024-11-18T15:00:50.070Z] Total : 7129.75 27.85 0.00 0.00 0.00 0.00 0.00 00:14:26.480 00:14:27.416 [2024-11-18T15:00:51.006Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:27.416 Nvme0n1 : 9.00 7122.33 27.82 0.00 0.00 0.00 0.00 0.00 00:14:27.416 [2024-11-18T15:00:51.006Z] =================================================================================================================== 00:14:27.416 [2024-11-18T15:00:51.006Z] Total : 7122.33 27.82 0.00 0.00 0.00 0.00 0.00 00:14:27.416 00:14:28.794 [2024-11-18T15:00:52.384Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:28.794 Nvme0n1 : 10.00 7147.40 27.92 0.00 0.00 0.00 0.00 0.00 00:14:28.794 [2024-11-18T15:00:52.384Z] =================================================================================================================== 00:14:28.794 [2024-11-18T15:00:52.384Z] Total : 7147.40 27.92 0.00 0.00 0.00 0.00 0.00 00:14:28.794 00:14:28.794 00:14:28.794 Latency(us) 00:14:28.794 [2024-11-18T15:00:52.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.794 [2024-11-18T15:00:52.384Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:28.794 Nvme0n1 : 10.01 7153.58 27.94 0.00 0.00 17888.24 8519.68 138221.38 00:14:28.794 [2024-11-18T15:00:52.384Z] =================================================================================================================== 00:14:28.794 [2024-11-18T15:00:52.384Z] Total : 7153.58 27.94 0.00 0.00 17888.24 8519.68 138221.38 00:14:28.794 0 00:14:28.794 15:00:51 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83961 00:14:28.794 15:00:51 -- common/autotest_common.sh@936 -- # '[' -z 83961 ']' 00:14:28.794 15:00:51 -- common/autotest_common.sh@940 -- # kill -0 83961 00:14:28.794 15:00:51 -- common/autotest_common.sh@941 -- # uname 00:14:28.795 15:00:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:28.795 15:00:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83961 00:14:28.795 15:00:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:28.795 15:00:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:28.795 killing process with pid 83961 00:14:28.795 15:00:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83961' 00:14:28.795 15:00:52 -- common/autotest_common.sh@955 -- # kill 83961 00:14:28.795 Received shutdown signal, test time was about 10.000000 seconds 00:14:28.795 00:14:28.795 Latency(us) 00:14:28.795 [2024-11-18T15:00:52.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.795 [2024-11-18T15:00:52.385Z] =================================================================================================================== 00:14:28.795 [2024-11-18T15:00:52.385Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:14:28.795 15:00:52 -- common/autotest_common.sh@960 -- # wait 83961 00:14:28.795 15:00:52 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:29.053 15:00:52 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 870217f2-41cb-407d-8427-286068f8a167 00:14:29.053 15:00:52 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:29.312 15:00:52 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:29.312 15:00:52 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:29.312 15:00:52 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:29.571 [2024-11-18 15:00:53.008992] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:29.571 15:00:53 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 870217f2-41cb-407d-8427-286068f8a167 00:14:29.571 15:00:53 -- common/autotest_common.sh@650 -- # local es=0 00:14:29.571 15:00:53 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 870217f2-41cb-407d-8427-286068f8a167 00:14:29.571 15:00:53 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:29.571 15:00:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:29.571 15:00:53 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:29.571 15:00:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:29.571 15:00:53 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:29.571 15:00:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:29.571 15:00:53 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:29.571 15:00:53 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:29.571 15:00:53 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 870217f2-41cb-407d-8427-286068f8a167 00:14:29.830 2024/11/18 15:00:53 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:870217f2-41cb-407d-8427-286068f8a167], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:29.830 request: 00:14:29.830 { 00:14:29.830 "method": "bdev_lvol_get_lvstores", 00:14:29.830 "params": { 00:14:29.830 "uuid": "870217f2-41cb-407d-8427-286068f8a167" 00:14:29.830 } 00:14:29.830 } 00:14:29.830 Got JSON-RPC error response 00:14:29.830 GoRPCClient: error on JSON-RPC call 00:14:29.830 15:00:53 -- common/autotest_common.sh@653 -- # es=1 00:14:29.830 15:00:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:29.830 15:00:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:29.830 15:00:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:29.830 15:00:53 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:30.089 aio_bdev 00:14:30.089 15:00:53 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 53c6a543-9800-4466-97b0-005ea4c19cac 00:14:30.089 15:00:53 -- common/autotest_common.sh@897 -- # local 
bdev_name=53c6a543-9800-4466-97b0-005ea4c19cac 00:14:30.089 15:00:53 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:30.089 15:00:53 -- common/autotest_common.sh@899 -- # local i 00:14:30.089 15:00:53 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:30.089 15:00:53 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:30.089 15:00:53 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:30.347 15:00:53 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 53c6a543-9800-4466-97b0-005ea4c19cac -t 2000 00:14:30.347 [ 00:14:30.347 { 00:14:30.347 "aliases": [ 00:14:30.347 "lvs/lvol" 00:14:30.347 ], 00:14:30.347 "assigned_rate_limits": { 00:14:30.347 "r_mbytes_per_sec": 0, 00:14:30.347 "rw_ios_per_sec": 0, 00:14:30.347 "rw_mbytes_per_sec": 0, 00:14:30.347 "w_mbytes_per_sec": 0 00:14:30.347 }, 00:14:30.347 "block_size": 4096, 00:14:30.347 "claimed": false, 00:14:30.347 "driver_specific": { 00:14:30.347 "lvol": { 00:14:30.347 "base_bdev": "aio_bdev", 00:14:30.347 "clone": false, 00:14:30.347 "esnap_clone": false, 00:14:30.347 "lvol_store_uuid": "870217f2-41cb-407d-8427-286068f8a167", 00:14:30.347 "snapshot": false, 00:14:30.347 "thin_provision": false 00:14:30.347 } 00:14:30.347 }, 00:14:30.347 "name": "53c6a543-9800-4466-97b0-005ea4c19cac", 00:14:30.347 "num_blocks": 38912, 00:14:30.347 "product_name": "Logical Volume", 00:14:30.347 "supported_io_types": { 00:14:30.347 "abort": false, 00:14:30.347 "compare": false, 00:14:30.347 "compare_and_write": false, 00:14:30.347 "flush": false, 00:14:30.347 "nvme_admin": false, 00:14:30.347 "nvme_io": false, 00:14:30.347 "read": true, 00:14:30.347 "reset": true, 00:14:30.347 "unmap": true, 00:14:30.347 "write": true, 00:14:30.347 "write_zeroes": true 00:14:30.347 }, 00:14:30.347 "uuid": "53c6a543-9800-4466-97b0-005ea4c19cac", 00:14:30.347 "zoned": false 00:14:30.348 } 00:14:30.348 ] 00:14:30.348 15:00:53 -- common/autotest_common.sh@905 -- # return 0 00:14:30.348 15:00:53 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:30.348 15:00:53 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 870217f2-41cb-407d-8427-286068f8a167 00:14:30.607 15:00:54 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:30.607 15:00:54 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 870217f2-41cb-407d-8427-286068f8a167 00:14:30.607 15:00:54 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:30.865 15:00:54 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:30.865 15:00:54 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 53c6a543-9800-4466-97b0-005ea4c19cac 00:14:31.124 15:00:54 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 870217f2-41cb-407d-8427-286068f8a167 00:14:31.388 15:00:54 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:31.663 15:00:55 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:31.935 ************************************ 00:14:31.935 END TEST lvs_grow_clean 00:14:31.935 ************************************ 00:14:31.935 00:14:31.935 real 0m17.754s 00:14:31.935 user 0m17.153s 00:14:31.935 sys 0m2.152s 00:14:31.935 15:00:55 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:14:31.935 15:00:55 -- common/autotest_common.sh@10 -- # set +x 00:14:32.194 15:00:55 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:32.194 15:00:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:32.194 15:00:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:32.194 15:00:55 -- common/autotest_common.sh@10 -- # set +x 00:14:32.194 ************************************ 00:14:32.194 START TEST lvs_grow_dirty 00:14:32.194 ************************************ 00:14:32.194 15:00:55 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:14:32.194 15:00:55 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:32.194 15:00:55 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:32.194 15:00:55 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:32.194 15:00:55 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:32.194 15:00:55 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:32.194 15:00:55 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:32.194 15:00:55 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:32.194 15:00:55 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:32.194 15:00:55 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:32.453 15:00:55 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:32.453 15:00:55 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:32.712 15:00:56 -- target/nvmf_lvs_grow.sh@28 -- # lvs=4b322b97-7d39-4539-bef2-06f0dac24f59 00:14:32.712 15:00:56 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b322b97-7d39-4539-bef2-06f0dac24f59 00:14:32.712 15:00:56 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:32.712 15:00:56 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:32.712 15:00:56 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:32.712 15:00:56 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4b322b97-7d39-4539-bef2-06f0dac24f59 lvol 150 00:14:32.970 15:00:56 -- target/nvmf_lvs_grow.sh@33 -- # lvol=c16cf88d-a9c1-469e-9667-54b793dc652e 00:14:32.970 15:00:56 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:32.970 15:00:56 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:33.229 [2024-11-18 15:00:56.734053] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:33.229 [2024-11-18 15:00:56.734123] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:33.229 true 00:14:33.229 15:00:56 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b322b97-7d39-4539-bef2-06f0dac24f59 00:14:33.229 15:00:56 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:33.488 15:00:56 -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:33.488 15:00:56 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:33.746 15:00:57 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c16cf88d-a9c1-469e-9667-54b793dc652e 00:14:34.004 15:00:57 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:34.263 15:00:57 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:34.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:34.522 15:00:57 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:34.522 15:00:57 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=84391 00:14:34.522 15:00:57 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:34.522 15:00:57 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 84391 /var/tmp/bdevperf.sock 00:14:34.522 15:00:57 -- common/autotest_common.sh@829 -- # '[' -z 84391 ']' 00:14:34.522 15:00:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:34.522 15:00:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:34.522 15:00:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:34.522 15:00:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:34.522 15:00:57 -- common/autotest_common.sh@10 -- # set +x 00:14:34.522 [2024-11-18 15:00:57.922173] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:14:34.522 [2024-11-18 15:00:57.922905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84391 ] 00:14:34.522 [2024-11-18 15:00:58.056847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.780 [2024-11-18 15:00:58.127230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.348 15:00:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:35.348 15:00:58 -- common/autotest_common.sh@862 -- # return 0 00:14:35.348 15:00:58 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:35.606 Nvme0n1 00:14:35.606 15:00:59 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:35.864 [ 00:14:35.864 { 00:14:35.864 "aliases": [ 00:14:35.864 "c16cf88d-a9c1-469e-9667-54b793dc652e" 00:14:35.864 ], 00:14:35.864 "assigned_rate_limits": { 00:14:35.864 "r_mbytes_per_sec": 0, 00:14:35.864 "rw_ios_per_sec": 0, 00:14:35.864 "rw_mbytes_per_sec": 0, 00:14:35.864 "w_mbytes_per_sec": 0 00:14:35.864 }, 00:14:35.864 "block_size": 4096, 00:14:35.864 "claimed": false, 00:14:35.864 "driver_specific": { 00:14:35.864 "mp_policy": "active_passive", 00:14:35.864 "nvme": [ 00:14:35.864 { 00:14:35.864 "ctrlr_data": { 00:14:35.864 "ana_reporting": false, 00:14:35.864 "cntlid": 1, 00:14:35.864 "firmware_revision": "24.01.1", 00:14:35.864 "model_number": "SPDK bdev Controller", 00:14:35.864 "multi_ctrlr": true, 00:14:35.864 "oacs": { 00:14:35.864 "firmware": 0, 00:14:35.864 "format": 0, 00:14:35.864 "ns_manage": 0, 00:14:35.864 "security": 0 00:14:35.864 }, 00:14:35.864 "serial_number": "SPDK0", 00:14:35.864 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:35.864 "vendor_id": "0x8086" 00:14:35.864 }, 00:14:35.864 "ns_data": { 00:14:35.864 "can_share": true, 00:14:35.864 "id": 1 00:14:35.864 }, 00:14:35.864 "trid": { 00:14:35.864 "adrfam": "IPv4", 00:14:35.864 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:35.864 "traddr": "10.0.0.2", 00:14:35.864 "trsvcid": "4420", 00:14:35.864 "trtype": "TCP" 00:14:35.864 }, 00:14:35.864 "vs": { 00:14:35.864 "nvme_version": "1.3" 00:14:35.864 } 00:14:35.864 } 00:14:35.864 ] 00:14:35.864 }, 00:14:35.864 "name": "Nvme0n1", 00:14:35.864 "num_blocks": 38912, 00:14:35.864 "product_name": "NVMe disk", 00:14:35.864 "supported_io_types": { 00:14:35.864 "abort": true, 00:14:35.864 "compare": true, 00:14:35.864 "compare_and_write": true, 00:14:35.864 "flush": true, 00:14:35.864 "nvme_admin": true, 00:14:35.864 "nvme_io": true, 00:14:35.864 "read": true, 00:14:35.865 "reset": true, 00:14:35.865 "unmap": true, 00:14:35.865 "write": true, 00:14:35.865 "write_zeroes": true 00:14:35.865 }, 00:14:35.865 "uuid": "c16cf88d-a9c1-469e-9667-54b793dc652e", 00:14:35.865 "zoned": false 00:14:35.865 } 00:14:35.865 ] 00:14:35.865 15:00:59 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=84440 00:14:35.865 15:00:59 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:35.865 15:00:59 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:36.123 Running I/O for 10 seconds... 
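Behind the xtrace noise, everything from the lvs_grow_dirty setup through the "Running I/O for 10 seconds" line above reduces to a short shell/RPC sequence: put an AIO bdev over a 200M file, create a logical volume store and a 150M lvol on it, enlarge and rescan the backing file, export the lvol over NVMe/TCP, drive it with bdevperf in wait-for-RPC mode, and grow the lvstore while the I/O is in flight. A condensed sketch using this run's paths, names and addresses (the RPC/AIO shell variables are shorthand introduced here; lvs and lvol hold the UUIDs that show up in this job as 4b322b97-... and c16cf88d-...):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

  # lvstore on an AIO bdev, 150M lvol, then enlarge and rescan the backing file
  truncate -s 200M "$AIO"
  $RPC bdev_aio_create "$AIO" aio_bdev 4096
  lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)
  truncate -s 400M "$AIO"
  $RPC bdev_aio_rescan aio_bdev

  # export the lvol over NVMe/TCP
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # drive bdevperf through its own RPC socket: start idle (-z), attach, then perform_tests
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

  # while I/O runs, grow the lvstore into the new space and check the cluster count
  $RPC bdev_lvol_grow_lvstore -u "$lvs"
  $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 -> 99 in this run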
00:14:37.058 Latency(us) 00:14:37.058 [2024-11-18T15:01:00.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.058 [2024-11-18T15:01:00.648Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.058 Nvme0n1 : 1.00 7780.00 30.39 0.00 0.00 0.00 0.00 0.00 00:14:37.058 [2024-11-18T15:01:00.648Z] =================================================================================================================== 00:14:37.058 [2024-11-18T15:01:00.648Z] Total : 7780.00 30.39 0.00 0.00 0.00 0.00 0.00 00:14:37.058 00:14:37.995 15:01:01 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4b322b97-7d39-4539-bef2-06f0dac24f59 00:14:37.995 [2024-11-18T15:01:01.585Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.995 Nvme0n1 : 2.00 7627.00 29.79 0.00 0.00 0.00 0.00 0.00 00:14:37.995 [2024-11-18T15:01:01.585Z] =================================================================================================================== 00:14:37.995 [2024-11-18T15:01:01.585Z] Total : 7627.00 29.79 0.00 0.00 0.00 0.00 0.00 00:14:37.995 00:14:38.253 true 00:14:38.253 15:01:01 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b322b97-7d39-4539-bef2-06f0dac24f59 00:14:38.253 15:01:01 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:38.512 15:01:01 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:38.512 15:01:01 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:38.512 15:01:01 -- target/nvmf_lvs_grow.sh@65 -- # wait 84440 00:14:39.078 [2024-11-18T15:01:02.668Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.078 Nvme0n1 : 3.00 7615.00 29.75 0.00 0.00 0.00 0.00 0.00 00:14:39.078 [2024-11-18T15:01:02.668Z] =================================================================================================================== 00:14:39.078 [2024-11-18T15:01:02.668Z] Total : 7615.00 29.75 0.00 0.00 0.00 0.00 0.00 00:14:39.078 00:14:40.013 [2024-11-18T15:01:03.603Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.013 Nvme0n1 : 4.00 7587.50 29.64 0.00 0.00 0.00 0.00 0.00 00:14:40.013 [2024-11-18T15:01:03.603Z] =================================================================================================================== 00:14:40.013 [2024-11-18T15:01:03.603Z] Total : 7587.50 29.64 0.00 0.00 0.00 0.00 0.00 00:14:40.013 00:14:41.390 [2024-11-18T15:01:04.980Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.390 Nvme0n1 : 5.00 7588.00 29.64 0.00 0.00 0.00 0.00 0.00 00:14:41.390 [2024-11-18T15:01:04.980Z] =================================================================================================================== 00:14:41.390 [2024-11-18T15:01:04.980Z] Total : 7588.00 29.64 0.00 0.00 0.00 0.00 0.00 00:14:41.390 00:14:41.957 [2024-11-18T15:01:05.547Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.957 Nvme0n1 : 6.00 7435.50 29.04 0.00 0.00 0.00 0.00 0.00 00:14:41.957 [2024-11-18T15:01:05.547Z] =================================================================================================================== 00:14:41.957 [2024-11-18T15:01:05.547Z] Total : 7435.50 29.04 0.00 0.00 0.00 0.00 0.00 00:14:41.957 00:14:43.337 [2024-11-18T15:01:06.927Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:14:43.337 Nvme0n1 : 7.00 7434.43 29.04 0.00 0.00 0.00 0.00 0.00 00:14:43.337 [2024-11-18T15:01:06.927Z] =================================================================================================================== 00:14:43.337 [2024-11-18T15:01:06.927Z] Total : 7434.43 29.04 0.00 0.00 0.00 0.00 0.00 00:14:43.337 00:14:44.273 [2024-11-18T15:01:07.863Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.273 Nvme0n1 : 8.00 7369.75 28.79 0.00 0.00 0.00 0.00 0.00 00:14:44.273 [2024-11-18T15:01:07.863Z] =================================================================================================================== 00:14:44.273 [2024-11-18T15:01:07.863Z] Total : 7369.75 28.79 0.00 0.00 0.00 0.00 0.00 00:14:44.273 00:14:45.210 [2024-11-18T15:01:08.800Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.210 Nvme0n1 : 9.00 7346.44 28.70 0.00 0.00 0.00 0.00 0.00 00:14:45.210 [2024-11-18T15:01:08.800Z] =================================================================================================================== 00:14:45.211 [2024-11-18T15:01:08.801Z] Total : 7346.44 28.70 0.00 0.00 0.00 0.00 0.00 00:14:45.211 00:14:46.149 [2024-11-18T15:01:09.739Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.149 Nvme0n1 : 10.00 7332.60 28.64 0.00 0.00 0.00 0.00 0.00 00:14:46.149 [2024-11-18T15:01:09.739Z] =================================================================================================================== 00:14:46.149 [2024-11-18T15:01:09.739Z] Total : 7332.60 28.64 0.00 0.00 0.00 0.00 0.00 00:14:46.149 00:14:46.149 00:14:46.149 Latency(us) 00:14:46.149 [2024-11-18T15:01:09.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.149 [2024-11-18T15:01:09.739Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.149 Nvme0n1 : 10.00 7343.36 28.68 0.00 0.00 17426.94 5510.98 152520.15 00:14:46.149 [2024-11-18T15:01:09.739Z] =================================================================================================================== 00:14:46.149 [2024-11-18T15:01:09.739Z] Total : 7343.36 28.68 0.00 0.00 17426.94 5510.98 152520.15 00:14:46.149 0 00:14:46.149 15:01:09 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 84391 00:14:46.150 15:01:09 -- common/autotest_common.sh@936 -- # '[' -z 84391 ']' 00:14:46.150 15:01:09 -- common/autotest_common.sh@940 -- # kill -0 84391 00:14:46.150 15:01:09 -- common/autotest_common.sh@941 -- # uname 00:14:46.150 15:01:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:46.150 15:01:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84391 00:14:46.150 killing process with pid 84391 00:14:46.150 Received shutdown signal, test time was about 10.000000 seconds 00:14:46.150 00:14:46.150 Latency(us) 00:14:46.150 [2024-11-18T15:01:09.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.150 [2024-11-18T15:01:09.740Z] =================================================================================================================== 00:14:46.150 [2024-11-18T15:01:09.740Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:46.150 15:01:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:46.150 15:01:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:46.150 15:01:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84391' 00:14:46.150 15:01:09 -- common/autotest_common.sh@955 
-- # kill 84391 00:14:46.150 15:01:09 -- common/autotest_common.sh@960 -- # wait 84391 00:14:46.409 15:01:09 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:46.667 15:01:10 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b322b97-7d39-4539-bef2-06f0dac24f59 00:14:46.667 15:01:10 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:46.926 15:01:10 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:46.926 15:01:10 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:46.926 15:01:10 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 83798 00:14:46.926 15:01:10 -- target/nvmf_lvs_grow.sh@74 -- # wait 83798 00:14:46.926 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 83798 Killed "${NVMF_APP[@]}" "$@" 00:14:46.926 15:01:10 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:46.926 15:01:10 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:46.926 15:01:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:46.926 15:01:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:46.926 15:01:10 -- common/autotest_common.sh@10 -- # set +x 00:14:46.926 15:01:10 -- nvmf/common.sh@469 -- # nvmfpid=84595 00:14:46.926 15:01:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:46.926 15:01:10 -- nvmf/common.sh@470 -- # waitforlisten 84595 00:14:46.926 15:01:10 -- common/autotest_common.sh@829 -- # '[' -z 84595 ']' 00:14:46.926 15:01:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.926 15:01:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:46.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.926 15:01:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.926 15:01:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:46.926 15:01:10 -- common/autotest_common.sh@10 -- # set +x 00:14:46.926 [2024-11-18 15:01:10.485623] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:46.926 [2024-11-18 15:01:10.485722] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.185 [2024-11-18 15:01:10.628648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.185 [2024-11-18 15:01:10.684348] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:47.185 [2024-11-18 15:01:10.684482] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.185 [2024-11-18 15:01:10.684495] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.185 [2024-11-18 15:01:10.684503] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
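At this point the run is in its dirty-restart phase: the original target (pid 83798) was killed with kill -9, so the lvstore was never shut down cleanly, and a fresh nvmf_tgt (pid 84595) has just been started. In the entries that follow, re-creating the AIO bdev over the same backing file is what triggers examine and blobstore recovery (the "Performing recovery on blobstore" notices), after which the recovered lvol and the grown cluster counts are checked. Condensed, with this run's UUIDs:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c16cf88d-a9c1-469e-9667-54b793dc652e -t 2000
  # free_clusters / total_data_clusters come back as 61 and 99 in this run
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b322b97-7d39-4539-bef2-06f0dac24f59 | jq -r '.[0].free_clusters, .[0].total_data_clusters'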
00:14:47.185 [2024-11-18 15:01:10.684532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.122 15:01:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:48.122 15:01:11 -- common/autotest_common.sh@862 -- # return 0 00:14:48.122 15:01:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:48.122 15:01:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:48.122 15:01:11 -- common/autotest_common.sh@10 -- # set +x 00:14:48.122 15:01:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.122 15:01:11 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:48.122 [2024-11-18 15:01:11.690574] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:48.122 [2024-11-18 15:01:11.690865] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:48.122 [2024-11-18 15:01:11.691071] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:48.380 15:01:11 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:48.380 15:01:11 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev c16cf88d-a9c1-469e-9667-54b793dc652e 00:14:48.380 15:01:11 -- common/autotest_common.sh@897 -- # local bdev_name=c16cf88d-a9c1-469e-9667-54b793dc652e 00:14:48.380 15:01:11 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:48.380 15:01:11 -- common/autotest_common.sh@899 -- # local i 00:14:48.380 15:01:11 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:48.380 15:01:11 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:48.380 15:01:11 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:48.381 15:01:11 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c16cf88d-a9c1-469e-9667-54b793dc652e -t 2000 00:14:48.639 [ 00:14:48.639 { 00:14:48.639 "aliases": [ 00:14:48.639 "lvs/lvol" 00:14:48.639 ], 00:14:48.639 "assigned_rate_limits": { 00:14:48.639 "r_mbytes_per_sec": 0, 00:14:48.639 "rw_ios_per_sec": 0, 00:14:48.639 "rw_mbytes_per_sec": 0, 00:14:48.639 "w_mbytes_per_sec": 0 00:14:48.639 }, 00:14:48.639 "block_size": 4096, 00:14:48.639 "claimed": false, 00:14:48.639 "driver_specific": { 00:14:48.639 "lvol": { 00:14:48.639 "base_bdev": "aio_bdev", 00:14:48.639 "clone": false, 00:14:48.639 "esnap_clone": false, 00:14:48.639 "lvol_store_uuid": "4b322b97-7d39-4539-bef2-06f0dac24f59", 00:14:48.639 "snapshot": false, 00:14:48.639 "thin_provision": false 00:14:48.639 } 00:14:48.639 }, 00:14:48.639 "name": "c16cf88d-a9c1-469e-9667-54b793dc652e", 00:14:48.639 "num_blocks": 38912, 00:14:48.639 "product_name": "Logical Volume", 00:14:48.639 "supported_io_types": { 00:14:48.639 "abort": false, 00:14:48.639 "compare": false, 00:14:48.639 "compare_and_write": false, 00:14:48.639 "flush": false, 00:14:48.639 "nvme_admin": false, 00:14:48.639 "nvme_io": false, 00:14:48.639 "read": true, 00:14:48.639 "reset": true, 00:14:48.639 "unmap": true, 00:14:48.639 "write": true, 00:14:48.639 "write_zeroes": true 00:14:48.639 }, 00:14:48.639 "uuid": "c16cf88d-a9c1-469e-9667-54b793dc652e", 00:14:48.639 "zoned": false 00:14:48.639 } 00:14:48.639 ] 00:14:48.639 15:01:12 -- common/autotest_common.sh@905 -- # return 0 00:14:48.639 15:01:12 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
4b322b97-7d39-4539-bef2-06f0dac24f59 00:14:48.639 15:01:12 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:48.898 15:01:12 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:48.898 15:01:12 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b322b97-7d39-4539-bef2-06f0dac24f59 00:14:48.898 15:01:12 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:49.156 15:01:12 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:49.156 15:01:12 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:49.415 [2024-11-18 15:01:12.788371] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:49.415 15:01:12 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b322b97-7d39-4539-bef2-06f0dac24f59 00:14:49.415 15:01:12 -- common/autotest_common.sh@650 -- # local es=0 00:14:49.415 15:01:12 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b322b97-7d39-4539-bef2-06f0dac24f59 00:14:49.415 15:01:12 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:49.415 15:01:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:49.415 15:01:12 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:49.415 15:01:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:49.415 15:01:12 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:49.415 15:01:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:49.415 15:01:12 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:49.415 15:01:12 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:49.415 15:01:12 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b322b97-7d39-4539-bef2-06f0dac24f59 00:14:49.674 2024/11/18 15:01:13 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:4b322b97-7d39-4539-bef2-06f0dac24f59], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:49.674 request: 00:14:49.674 { 00:14:49.674 "method": "bdev_lvol_get_lvstores", 00:14:49.674 "params": { 00:14:49.674 "uuid": "4b322b97-7d39-4539-bef2-06f0dac24f59" 00:14:49.674 } 00:14:49.674 } 00:14:49.674 Got JSON-RPC error response 00:14:49.674 GoRPCClient: error on JSON-RPC call 00:14:49.674 15:01:13 -- common/autotest_common.sh@653 -- # es=1 00:14:49.674 15:01:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:49.674 15:01:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:49.674 15:01:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:49.674 15:01:13 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:49.674 aio_bdev 00:14:49.674 15:01:13 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev c16cf88d-a9c1-469e-9667-54b793dc652e 00:14:49.674 15:01:13 -- common/autotest_common.sh@897 -- # local bdev_name=c16cf88d-a9c1-469e-9667-54b793dc652e 00:14:49.674 15:01:13 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:49.674 
15:01:13 -- common/autotest_common.sh@899 -- # local i 00:14:49.674 15:01:13 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:49.674 15:01:13 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:49.674 15:01:13 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:49.933 15:01:13 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c16cf88d-a9c1-469e-9667-54b793dc652e -t 2000 00:14:50.192 [ 00:14:50.192 { 00:14:50.192 "aliases": [ 00:14:50.192 "lvs/lvol" 00:14:50.192 ], 00:14:50.192 "assigned_rate_limits": { 00:14:50.192 "r_mbytes_per_sec": 0, 00:14:50.192 "rw_ios_per_sec": 0, 00:14:50.192 "rw_mbytes_per_sec": 0, 00:14:50.192 "w_mbytes_per_sec": 0 00:14:50.192 }, 00:14:50.192 "block_size": 4096, 00:14:50.192 "claimed": false, 00:14:50.192 "driver_specific": { 00:14:50.192 "lvol": { 00:14:50.192 "base_bdev": "aio_bdev", 00:14:50.192 "clone": false, 00:14:50.192 "esnap_clone": false, 00:14:50.192 "lvol_store_uuid": "4b322b97-7d39-4539-bef2-06f0dac24f59", 00:14:50.192 "snapshot": false, 00:14:50.192 "thin_provision": false 00:14:50.192 } 00:14:50.192 }, 00:14:50.192 "name": "c16cf88d-a9c1-469e-9667-54b793dc652e", 00:14:50.192 "num_blocks": 38912, 00:14:50.192 "product_name": "Logical Volume", 00:14:50.192 "supported_io_types": { 00:14:50.192 "abort": false, 00:14:50.192 "compare": false, 00:14:50.192 "compare_and_write": false, 00:14:50.192 "flush": false, 00:14:50.192 "nvme_admin": false, 00:14:50.192 "nvme_io": false, 00:14:50.192 "read": true, 00:14:50.192 "reset": true, 00:14:50.192 "unmap": true, 00:14:50.192 "write": true, 00:14:50.192 "write_zeroes": true 00:14:50.192 }, 00:14:50.192 "uuid": "c16cf88d-a9c1-469e-9667-54b793dc652e", 00:14:50.192 "zoned": false 00:14:50.192 } 00:14:50.192 ] 00:14:50.192 15:01:13 -- common/autotest_common.sh@905 -- # return 0 00:14:50.192 15:01:13 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b322b97-7d39-4539-bef2-06f0dac24f59 00:14:50.192 15:01:13 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:50.451 15:01:13 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:50.451 15:01:13 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b322b97-7d39-4539-bef2-06f0dac24f59 00:14:50.451 15:01:13 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:50.710 15:01:14 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:50.710 15:01:14 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c16cf88d-a9c1-469e-9667-54b793dc652e 00:14:50.968 15:01:14 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4b322b97-7d39-4539-bef2-06f0dac24f59 00:14:51.227 15:01:14 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:51.486 15:01:14 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:51.745 00:14:51.745 real 0m19.639s 00:14:51.745 user 0m38.354s 00:14:51.745 sys 0m10.041s 00:14:51.745 15:01:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:51.745 15:01:15 -- common/autotest_common.sh@10 -- # set +x 00:14:51.745 ************************************ 00:14:51.745 END TEST lvs_grow_dirty 00:14:51.745 ************************************ 00:14:51.745 15:01:15 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:51.745 15:01:15 -- common/autotest_common.sh@806 -- # type=--id 00:14:51.745 15:01:15 -- common/autotest_common.sh@807 -- # id=0 00:14:51.745 15:01:15 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:51.745 15:01:15 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:51.745 15:01:15 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:51.745 15:01:15 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:51.745 15:01:15 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:51.745 15:01:15 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:51.745 nvmf_trace.0 00:14:51.745 15:01:15 -- common/autotest_common.sh@821 -- # return 0 00:14:51.745 15:01:15 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:51.745 15:01:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:51.745 15:01:15 -- nvmf/common.sh@116 -- # sync 00:14:52.312 15:01:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:52.312 15:01:15 -- nvmf/common.sh@119 -- # set +e 00:14:52.312 15:01:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:52.312 15:01:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:52.312 rmmod nvme_tcp 00:14:52.312 rmmod nvme_fabrics 00:14:52.312 rmmod nvme_keyring 00:14:52.571 15:01:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:52.571 15:01:15 -- nvmf/common.sh@123 -- # set -e 00:14:52.571 15:01:15 -- nvmf/common.sh@124 -- # return 0 00:14:52.571 15:01:15 -- nvmf/common.sh@477 -- # '[' -n 84595 ']' 00:14:52.571 15:01:15 -- nvmf/common.sh@478 -- # killprocess 84595 00:14:52.571 15:01:15 -- common/autotest_common.sh@936 -- # '[' -z 84595 ']' 00:14:52.571 15:01:15 -- common/autotest_common.sh@940 -- # kill -0 84595 00:14:52.571 15:01:15 -- common/autotest_common.sh@941 -- # uname 00:14:52.571 15:01:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:52.571 15:01:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84595 00:14:52.571 15:01:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:52.571 15:01:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:52.571 killing process with pid 84595 00:14:52.571 15:01:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84595' 00:14:52.571 15:01:15 -- common/autotest_common.sh@955 -- # kill 84595 00:14:52.571 15:01:15 -- common/autotest_common.sh@960 -- # wait 84595 00:14:52.571 15:01:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:52.571 15:01:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:52.571 15:01:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:52.571 15:01:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:52.571 15:01:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:52.571 15:01:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.571 15:01:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.571 15:01:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.830 15:01:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:52.830 00:14:52.830 real 0m40.302s 00:14:52.830 user 1m1.791s 00:14:52.830 sys 0m13.388s 00:14:52.830 15:01:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:52.830 ************************************ 00:14:52.830 15:01:16 -- common/autotest_common.sh@10 -- # set 
+x 00:14:52.830 END TEST nvmf_lvs_grow 00:14:52.830 ************************************ 00:14:52.830 15:01:16 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:52.830 15:01:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:52.830 15:01:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:52.830 15:01:16 -- common/autotest_common.sh@10 -- # set +x 00:14:52.830 ************************************ 00:14:52.830 START TEST nvmf_bdev_io_wait 00:14:52.830 ************************************ 00:14:52.830 15:01:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:52.830 * Looking for test storage... 00:14:52.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:52.830 15:01:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:52.830 15:01:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:52.830 15:01:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:52.830 15:01:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:52.830 15:01:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:52.830 15:01:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:52.830 15:01:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:52.830 15:01:16 -- scripts/common.sh@335 -- # IFS=.-: 00:14:52.831 15:01:16 -- scripts/common.sh@335 -- # read -ra ver1 00:14:52.831 15:01:16 -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.831 15:01:16 -- scripts/common.sh@336 -- # read -ra ver2 00:14:52.831 15:01:16 -- scripts/common.sh@337 -- # local 'op=<' 00:14:52.831 15:01:16 -- scripts/common.sh@339 -- # ver1_l=2 00:14:52.831 15:01:16 -- scripts/common.sh@340 -- # ver2_l=1 00:14:52.831 15:01:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:52.831 15:01:16 -- scripts/common.sh@343 -- # case "$op" in 00:14:52.831 15:01:16 -- scripts/common.sh@344 -- # : 1 00:14:52.831 15:01:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:52.831 15:01:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:52.831 15:01:16 -- scripts/common.sh@364 -- # decimal 1 00:14:52.831 15:01:16 -- scripts/common.sh@352 -- # local d=1 00:14:52.831 15:01:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.831 15:01:16 -- scripts/common.sh@354 -- # echo 1 00:14:52.831 15:01:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:52.831 15:01:16 -- scripts/common.sh@365 -- # decimal 2 00:14:52.831 15:01:16 -- scripts/common.sh@352 -- # local d=2 00:14:52.831 15:01:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.831 15:01:16 -- scripts/common.sh@354 -- # echo 2 00:14:52.831 15:01:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:52.831 15:01:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:52.831 15:01:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:52.831 15:01:16 -- scripts/common.sh@367 -- # return 0 00:14:52.831 15:01:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.831 15:01:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:52.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.831 --rc genhtml_branch_coverage=1 00:14:52.831 --rc genhtml_function_coverage=1 00:14:52.831 --rc genhtml_legend=1 00:14:52.831 --rc geninfo_all_blocks=1 00:14:52.831 --rc geninfo_unexecuted_blocks=1 00:14:52.831 00:14:52.831 ' 00:14:52.831 15:01:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:52.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.831 --rc genhtml_branch_coverage=1 00:14:52.831 --rc genhtml_function_coverage=1 00:14:52.831 --rc genhtml_legend=1 00:14:52.831 --rc geninfo_all_blocks=1 00:14:52.831 --rc geninfo_unexecuted_blocks=1 00:14:52.831 00:14:52.831 ' 00:14:52.831 15:01:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:52.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.831 --rc genhtml_branch_coverage=1 00:14:52.831 --rc genhtml_function_coverage=1 00:14:52.831 --rc genhtml_legend=1 00:14:52.831 --rc geninfo_all_blocks=1 00:14:52.831 --rc geninfo_unexecuted_blocks=1 00:14:52.831 00:14:52.831 ' 00:14:52.831 15:01:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:52.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.831 --rc genhtml_branch_coverage=1 00:14:52.831 --rc genhtml_function_coverage=1 00:14:52.831 --rc genhtml_legend=1 00:14:52.831 --rc geninfo_all_blocks=1 00:14:52.831 --rc geninfo_unexecuted_blocks=1 00:14:52.831 00:14:52.831 ' 00:14:52.831 15:01:16 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:52.831 15:01:16 -- nvmf/common.sh@7 -- # uname -s 00:14:53.090 15:01:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.090 15:01:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.090 15:01:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.090 15:01:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.090 15:01:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.090 15:01:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.090 15:01:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.090 15:01:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.090 15:01:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.090 15:01:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.090 15:01:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 
00:14:53.090 15:01:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:14:53.090 15:01:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.090 15:01:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.090 15:01:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:53.090 15:01:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:53.090 15:01:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.090 15:01:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.090 15:01:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.090 15:01:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.090 15:01:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.090 15:01:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.090 15:01:16 -- paths/export.sh@5 -- # export PATH 00:14:53.090 15:01:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.090 15:01:16 -- nvmf/common.sh@46 -- # : 0 00:14:53.090 15:01:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:53.090 15:01:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:53.090 15:01:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:53.090 15:01:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.090 15:01:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.090 15:01:16 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:53.090 15:01:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:53.090 15:01:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:53.090 15:01:16 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:53.090 15:01:16 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:53.090 15:01:16 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:53.090 15:01:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:53.090 15:01:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.090 15:01:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:53.090 15:01:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:53.090 15:01:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:53.090 15:01:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.090 15:01:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.091 15:01:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.091 15:01:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:53.091 15:01:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:53.091 15:01:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:53.091 15:01:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:53.091 15:01:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:53.091 15:01:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:53.091 15:01:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.091 15:01:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:53.091 15:01:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:53.091 15:01:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:53.091 15:01:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:53.091 15:01:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:53.091 15:01:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:53.091 15:01:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.091 15:01:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:53.091 15:01:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:53.091 15:01:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:53.091 15:01:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:53.091 15:01:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:53.091 15:01:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:53.091 Cannot find device "nvmf_tgt_br" 00:14:53.091 15:01:16 -- nvmf/common.sh@154 -- # true 00:14:53.091 15:01:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:53.091 Cannot find device "nvmf_tgt_br2" 00:14:53.091 15:01:16 -- nvmf/common.sh@155 -- # true 00:14:53.091 15:01:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:53.091 15:01:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:53.091 Cannot find device "nvmf_tgt_br" 00:14:53.091 15:01:16 -- nvmf/common.sh@157 -- # true 00:14:53.091 15:01:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:53.091 Cannot find device "nvmf_tgt_br2" 00:14:53.091 15:01:16 -- nvmf/common.sh@158 -- # true 00:14:53.091 15:01:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:53.091 15:01:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:53.091 15:01:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:53.091 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.091 15:01:16 -- nvmf/common.sh@161 -- # true 00:14:53.091 15:01:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:53.091 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.091 15:01:16 -- nvmf/common.sh@162 -- # true 00:14:53.091 15:01:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:53.091 15:01:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:53.091 15:01:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:53.091 15:01:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:53.091 15:01:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:53.091 15:01:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:53.091 15:01:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:53.091 15:01:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:53.091 15:01:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:53.091 15:01:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:53.091 15:01:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:53.091 15:01:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:53.091 15:01:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:53.091 15:01:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:53.091 15:01:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:53.091 15:01:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:53.091 15:01:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:53.350 15:01:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:53.350 15:01:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:53.350 15:01:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:53.350 15:01:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:53.350 15:01:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:53.350 15:01:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:53.350 15:01:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:53.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:53.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:14:53.350 00:14:53.350 --- 10.0.0.2 ping statistics --- 00:14:53.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.350 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:53.350 15:01:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:53.350 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:53.350 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:14:53.350 00:14:53.350 --- 10.0.0.3 ping statistics --- 00:14:53.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.350 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:53.350 15:01:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:53.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:53.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:14:53.350 00:14:53.350 --- 10.0.0.1 ping statistics --- 00:14:53.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.350 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:14:53.350 15:01:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.350 15:01:16 -- nvmf/common.sh@421 -- # return 0 00:14:53.350 15:01:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:53.350 15:01:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.350 15:01:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:53.350 15:01:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:53.350 15:01:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.350 15:01:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:53.350 15:01:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:53.350 15:01:16 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:53.350 15:01:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:53.350 15:01:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:53.350 15:01:16 -- common/autotest_common.sh@10 -- # set +x 00:14:53.350 15:01:16 -- nvmf/common.sh@469 -- # nvmfpid=85011 00:14:53.350 15:01:16 -- nvmf/common.sh@470 -- # waitforlisten 85011 00:14:53.350 15:01:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:53.350 15:01:16 -- common/autotest_common.sh@829 -- # '[' -z 85011 ']' 00:14:53.350 15:01:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.350 15:01:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.350 15:01:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.350 15:01:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.350 15:01:16 -- common/autotest_common.sh@10 -- # set +x 00:14:53.350 [2024-11-18 15:01:16.823038] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:53.350 [2024-11-18 15:01:16.823130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.608 [2024-11-18 15:01:16.962170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:53.608 [2024-11-18 15:01:17.021925] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:53.608 [2024-11-18 15:01:17.022099] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.608 [2024-11-18 15:01:17.022111] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.608 [2024-11-18 15:01:17.022119] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
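For orientation, the nvmf_veth_init sequence traced above amounts to the following topology. This is a condensed, illustrative sketch only: interface names, addresses, and the port-4420 rule are taken from the commands in this log, and the real helper additionally brings each link up and adds a bridge FORWARD rule, which are omitted here.

  # Illustrative condensation of the traced nvmf_veth_init commands (not part of the test script)
  ip netns add nvmf_tgt_ns_spdk                                  # the SPDK target runs inside this namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target interface
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # NVMF_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
  ip link add nvmf_br type bridge                                # nvmf_br ties the three host-side peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic reach the host

The three pings that follow in the trace simply verify this wiring: 10.0.0.2 and 10.0.0.3 reachable from the host, and 10.0.0.1 reachable from inside the target namespace.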
00:14:53.608 [2024-11-18 15:01:17.022282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.608 [2024-11-18 15:01:17.022442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.608 [2024-11-18 15:01:17.023787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.608 [2024-11-18 15:01:17.023850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.608 15:01:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.608 15:01:17 -- common/autotest_common.sh@862 -- # return 0 00:14:53.608 15:01:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:53.608 15:01:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:53.608 15:01:17 -- common/autotest_common.sh@10 -- # set +x 00:14:53.608 15:01:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.608 15:01:17 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:53.608 15:01:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.608 15:01:17 -- common/autotest_common.sh@10 -- # set +x 00:14:53.608 15:01:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.608 15:01:17 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:53.608 15:01:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.608 15:01:17 -- common/autotest_common.sh@10 -- # set +x 00:14:53.608 15:01:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.608 15:01:17 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:53.608 15:01:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.608 15:01:17 -- common/autotest_common.sh@10 -- # set +x 00:14:53.868 [2024-11-18 15:01:17.197385] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.868 15:01:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.868 15:01:17 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:53.868 15:01:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.868 15:01:17 -- common/autotest_common.sh@10 -- # set +x 00:14:53.868 Malloc0 00:14:53.868 15:01:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.868 15:01:17 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:53.868 15:01:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.868 15:01:17 -- common/autotest_common.sh@10 -- # set +x 00:14:53.868 15:01:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.868 15:01:17 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:53.868 15:01:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.868 15:01:17 -- common/autotest_common.sh@10 -- # set +x 00:14:53.868 15:01:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.868 15:01:17 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:53.868 15:01:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.868 15:01:17 -- common/autotest_common.sh@10 -- # set +x 00:14:53.868 [2024-11-18 15:01:17.252459] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.868 15:01:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.868 15:01:17 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=85052 00:14:53.868 15:01:17 
-- target/bdev_io_wait.sh@30 -- # READ_PID=85054 00:14:53.868 15:01:17 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:53.868 15:01:17 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=85056 00:14:53.868 15:01:17 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:53.868 15:01:17 -- nvmf/common.sh@520 -- # config=() 00:14:53.868 15:01:17 -- nvmf/common.sh@520 -- # local subsystem config 00:14:53.868 15:01:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:53.868 15:01:17 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:53.868 15:01:17 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:53.868 15:01:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:53.868 { 00:14:53.868 "params": { 00:14:53.868 "name": "Nvme$subsystem", 00:14:53.868 "trtype": "$TEST_TRANSPORT", 00:14:53.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:53.868 "adrfam": "ipv4", 00:14:53.868 "trsvcid": "$NVMF_PORT", 00:14:53.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:53.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:53.868 "hdgst": ${hdgst:-false}, 00:14:53.868 "ddgst": ${ddgst:-false} 00:14:53.868 }, 00:14:53.868 "method": "bdev_nvme_attach_controller" 00:14:53.868 } 00:14:53.868 EOF 00:14:53.868 )") 00:14:53.868 15:01:17 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:53.868 15:01:17 -- nvmf/common.sh@520 -- # config=() 00:14:53.868 15:01:17 -- nvmf/common.sh@520 -- # local subsystem config 00:14:53.868 15:01:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:53.868 15:01:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:53.868 { 00:14:53.868 "params": { 00:14:53.868 "name": "Nvme$subsystem", 00:14:53.868 "trtype": "$TEST_TRANSPORT", 00:14:53.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:53.868 "adrfam": "ipv4", 00:14:53.868 "trsvcid": "$NVMF_PORT", 00:14:53.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:53.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:53.868 "hdgst": ${hdgst:-false}, 00:14:53.868 "ddgst": ${ddgst:-false} 00:14:53.868 }, 00:14:53.868 "method": "bdev_nvme_attach_controller" 00:14:53.868 } 00:14:53.868 EOF 00:14:53.868 )") 00:14:53.868 15:01:17 -- nvmf/common.sh@542 -- # cat 00:14:53.868 15:01:17 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:53.868 15:01:17 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:53.868 15:01:17 -- nvmf/common.sh@520 -- # config=() 00:14:53.868 15:01:17 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=85059 00:14:53.868 15:01:17 -- nvmf/common.sh@520 -- # local subsystem config 00:14:53.868 15:01:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:53.868 15:01:17 -- target/bdev_io_wait.sh@35 -- # sync 00:14:53.868 15:01:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:53.868 { 00:14:53.868 "params": { 00:14:53.868 "name": "Nvme$subsystem", 00:14:53.868 "trtype": "$TEST_TRANSPORT", 00:14:53.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:53.868 "adrfam": "ipv4", 00:14:53.868 "trsvcid": "$NVMF_PORT", 00:14:53.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:53.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:14:53.868 "hdgst": ${hdgst:-false}, 00:14:53.868 "ddgst": ${ddgst:-false} 00:14:53.868 }, 00:14:53.868 "method": "bdev_nvme_attach_controller" 00:14:53.868 } 00:14:53.868 EOF 00:14:53.868 )") 00:14:53.868 15:01:17 -- nvmf/common.sh@542 -- # cat 00:14:53.868 15:01:17 -- nvmf/common.sh@542 -- # cat 00:14:53.868 15:01:17 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:53.868 15:01:17 -- nvmf/common.sh@520 -- # config=() 00:14:53.868 15:01:17 -- nvmf/common.sh@520 -- # local subsystem config 00:14:53.868 15:01:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:53.868 15:01:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:53.868 { 00:14:53.868 "params": { 00:14:53.868 "name": "Nvme$subsystem", 00:14:53.868 "trtype": "$TEST_TRANSPORT", 00:14:53.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:53.868 "adrfam": "ipv4", 00:14:53.868 "trsvcid": "$NVMF_PORT", 00:14:53.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:53.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:53.868 "hdgst": ${hdgst:-false}, 00:14:53.868 "ddgst": ${ddgst:-false} 00:14:53.868 }, 00:14:53.868 "method": "bdev_nvme_attach_controller" 00:14:53.868 } 00:14:53.868 EOF 00:14:53.868 )") 00:14:53.868 15:01:17 -- nvmf/common.sh@542 -- # cat 00:14:53.868 15:01:17 -- nvmf/common.sh@544 -- # jq . 00:14:53.868 15:01:17 -- nvmf/common.sh@544 -- # jq . 00:14:53.868 15:01:17 -- nvmf/common.sh@545 -- # IFS=, 00:14:53.868 15:01:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:53.868 "params": { 00:14:53.868 "name": "Nvme1", 00:14:53.868 "trtype": "tcp", 00:14:53.868 "traddr": "10.0.0.2", 00:14:53.868 "adrfam": "ipv4", 00:14:53.868 "trsvcid": "4420", 00:14:53.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:53.868 "hdgst": false, 00:14:53.868 "ddgst": false 00:14:53.868 }, 00:14:53.868 "method": "bdev_nvme_attach_controller" 00:14:53.868 }' 00:14:53.868 15:01:17 -- nvmf/common.sh@545 -- # IFS=, 00:14:53.868 15:01:17 -- nvmf/common.sh@544 -- # jq . 00:14:53.868 15:01:17 -- nvmf/common.sh@544 -- # jq . 
00:14:53.868 15:01:17 -- nvmf/common.sh@545 -- # IFS=, 00:14:53.868 15:01:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:53.868 "params": { 00:14:53.868 "name": "Nvme1", 00:14:53.868 "trtype": "tcp", 00:14:53.868 "traddr": "10.0.0.2", 00:14:53.868 "adrfam": "ipv4", 00:14:53.868 "trsvcid": "4420", 00:14:53.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:53.868 "hdgst": false, 00:14:53.868 "ddgst": false 00:14:53.868 }, 00:14:53.868 "method": "bdev_nvme_attach_controller" 00:14:53.868 }' 00:14:53.868 15:01:17 -- nvmf/common.sh@545 -- # IFS=, 00:14:53.868 15:01:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:53.868 "params": { 00:14:53.868 "name": "Nvme1", 00:14:53.868 "trtype": "tcp", 00:14:53.868 "traddr": "10.0.0.2", 00:14:53.868 "adrfam": "ipv4", 00:14:53.868 "trsvcid": "4420", 00:14:53.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:53.868 "hdgst": false, 00:14:53.868 "ddgst": false 00:14:53.868 }, 00:14:53.868 "method": "bdev_nvme_attach_controller" 00:14:53.868 }' 00:14:53.868 15:01:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:53.868 "params": { 00:14:53.868 "name": "Nvme1", 00:14:53.868 "trtype": "tcp", 00:14:53.868 "traddr": "10.0.0.2", 00:14:53.868 "adrfam": "ipv4", 00:14:53.868 "trsvcid": "4420", 00:14:53.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:53.868 "hdgst": false, 00:14:53.868 "ddgst": false 00:14:53.868 }, 00:14:53.868 "method": "bdev_nvme_attach_controller" 00:14:53.868 }' 00:14:53.868 [2024-11-18 15:01:17.308911] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:53.868 [2024-11-18 15:01:17.309000] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:53.868 [2024-11-18 15:01:17.314535] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:53.868 [2024-11-18 15:01:17.314612] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:53.868 15:01:17 -- target/bdev_io_wait.sh@37 -- # wait 85052 00:14:53.868 [2024-11-18 15:01:17.337509] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:53.869 [2024-11-18 15:01:17.337590] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:53.869 [2024-11-18 15:01:17.339613] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:14:53.869 [2024-11-18 15:01:17.339698] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:54.127 [2024-11-18 15:01:17.520973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.127 [2024-11-18 15:01:17.594591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.127 [2024-11-18 15:01:17.616784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:54.128 [2024-11-18 15:01:17.664776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:54.128 [2024-11-18 15:01:17.681825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.387 [2024-11-18 15:01:17.757703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:54.387 [2024-11-18 15:01:17.761247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.387 Running I/O for 1 seconds... 00:14:54.387 Running I/O for 1 seconds... 00:14:54.387 [2024-11-18 15:01:17.837432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:54.387 Running I/O for 1 seconds... 00:14:54.387 Running I/O for 1 seconds... 00:14:55.323 00:14:55.323 Latency(us) 00:14:55.323 [2024-11-18T15:01:18.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.323 [2024-11-18T15:01:18.913Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:55.323 Nvme1n1 : 1.00 223101.10 871.49 0.00 0.00 571.43 218.76 968.15 00:14:55.323 [2024-11-18T15:01:18.913Z] =================================================================================================================== 00:14:55.323 [2024-11-18T15:01:18.913Z] Total : 223101.10 871.49 0.00 0.00 571.43 218.76 968.15 00:14:55.323 00:14:55.323 Latency(us) 00:14:55.323 [2024-11-18T15:01:18.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.323 [2024-11-18T15:01:18.913Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:55.323 Nvme1n1 : 1.02 4836.52 18.89 0.00 0.00 25991.71 8936.73 51952.17 00:14:55.323 [2024-11-18T15:01:18.913Z] =================================================================================================================== 00:14:55.323 [2024-11-18T15:01:18.913Z] Total : 4836.52 18.89 0.00 0.00 25991.71 8936.73 51952.17 00:14:55.583 00:14:55.583 Latency(us) 00:14:55.583 [2024-11-18T15:01:19.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.583 [2024-11-18T15:01:19.173Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:55.583 Nvme1n1 : 1.01 4761.75 18.60 0.00 0.00 26779.08 6940.86 57433.37 00:14:55.583 [2024-11-18T15:01:19.173Z] =================================================================================================================== 00:14:55.583 [2024-11-18T15:01:19.173Z] Total : 4761.75 18.60 0.00 0.00 26779.08 6940.86 57433.37 00:14:55.583 00:14:55.583 Latency(us) 00:14:55.583 [2024-11-18T15:01:19.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.583 [2024-11-18T15:01:19.173Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:55.583 Nvme1n1 : 1.01 6929.07 27.07 0.00 0.00 18396.86 7328.12 31218.97 00:14:55.583 [2024-11-18T15:01:19.173Z] 
=================================================================================================================== 00:14:55.583 [2024-11-18T15:01:19.173Z] Total : 6929.07 27.07 0.00 0.00 18396.86 7328.12 31218.97 00:14:55.841 15:01:19 -- target/bdev_io_wait.sh@38 -- # wait 85054 00:14:55.841 15:01:19 -- target/bdev_io_wait.sh@39 -- # wait 85056 00:14:55.841 15:01:19 -- target/bdev_io_wait.sh@40 -- # wait 85059 00:14:55.841 15:01:19 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:55.841 15:01:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.841 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:14:55.841 15:01:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.841 15:01:19 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:55.841 15:01:19 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:55.841 15:01:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:55.841 15:01:19 -- nvmf/common.sh@116 -- # sync 00:14:55.841 15:01:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:55.841 15:01:19 -- nvmf/common.sh@119 -- # set +e 00:14:55.841 15:01:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:55.841 15:01:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:55.841 rmmod nvme_tcp 00:14:56.102 rmmod nvme_fabrics 00:14:56.102 rmmod nvme_keyring 00:14:56.102 15:01:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:56.102 15:01:19 -- nvmf/common.sh@123 -- # set -e 00:14:56.102 15:01:19 -- nvmf/common.sh@124 -- # return 0 00:14:56.103 15:01:19 -- nvmf/common.sh@477 -- # '[' -n 85011 ']' 00:14:56.103 15:01:19 -- nvmf/common.sh@478 -- # killprocess 85011 00:14:56.103 15:01:19 -- common/autotest_common.sh@936 -- # '[' -z 85011 ']' 00:14:56.103 15:01:19 -- common/autotest_common.sh@940 -- # kill -0 85011 00:14:56.103 15:01:19 -- common/autotest_common.sh@941 -- # uname 00:14:56.103 15:01:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:56.103 15:01:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85011 00:14:56.103 killing process with pid 85011 00:14:56.103 15:01:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:56.103 15:01:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:56.103 15:01:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85011' 00:14:56.103 15:01:19 -- common/autotest_common.sh@955 -- # kill 85011 00:14:56.103 15:01:19 -- common/autotest_common.sh@960 -- # wait 85011 00:14:56.103 15:01:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:56.103 15:01:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:56.103 15:01:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:56.103 15:01:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:56.103 15:01:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:56.103 15:01:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.103 15:01:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.103 15:01:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.376 15:01:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:56.376 00:14:56.376 real 0m3.487s 00:14:56.376 user 0m15.962s 00:14:56.376 sys 0m1.813s 00:14:56.376 15:01:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:56.376 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:14:56.376 ************************************ 00:14:56.376 END TEST nvmf_bdev_io_wait 
00:14:56.376 ************************************ 00:14:56.376 15:01:19 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:56.376 15:01:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:56.376 15:01:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:56.376 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:14:56.376 ************************************ 00:14:56.376 START TEST nvmf_queue_depth 00:14:56.376 ************************************ 00:14:56.376 15:01:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:56.376 * Looking for test storage... 00:14:56.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:56.376 15:01:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:56.376 15:01:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:56.376 15:01:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:56.376 15:01:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:56.376 15:01:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:56.376 15:01:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:56.376 15:01:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:56.376 15:01:19 -- scripts/common.sh@335 -- # IFS=.-: 00:14:56.376 15:01:19 -- scripts/common.sh@335 -- # read -ra ver1 00:14:56.376 15:01:19 -- scripts/common.sh@336 -- # IFS=.-: 00:14:56.376 15:01:19 -- scripts/common.sh@336 -- # read -ra ver2 00:14:56.376 15:01:19 -- scripts/common.sh@337 -- # local 'op=<' 00:14:56.376 15:01:19 -- scripts/common.sh@339 -- # ver1_l=2 00:14:56.376 15:01:19 -- scripts/common.sh@340 -- # ver2_l=1 00:14:56.376 15:01:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:56.376 15:01:19 -- scripts/common.sh@343 -- # case "$op" in 00:14:56.376 15:01:19 -- scripts/common.sh@344 -- # : 1 00:14:56.376 15:01:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:56.376 15:01:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:56.376 15:01:19 -- scripts/common.sh@364 -- # decimal 1 00:14:56.376 15:01:19 -- scripts/common.sh@352 -- # local d=1 00:14:56.376 15:01:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.376 15:01:19 -- scripts/common.sh@354 -- # echo 1 00:14:56.376 15:01:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:56.376 15:01:19 -- scripts/common.sh@365 -- # decimal 2 00:14:56.376 15:01:19 -- scripts/common.sh@352 -- # local d=2 00:14:56.376 15:01:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.376 15:01:19 -- scripts/common.sh@354 -- # echo 2 00:14:56.376 15:01:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:56.376 15:01:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:56.377 15:01:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:56.377 15:01:19 -- scripts/common.sh@367 -- # return 0 00:14:56.377 15:01:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.377 15:01:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:56.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.377 --rc genhtml_branch_coverage=1 00:14:56.377 --rc genhtml_function_coverage=1 00:14:56.377 --rc genhtml_legend=1 00:14:56.377 --rc geninfo_all_blocks=1 00:14:56.377 --rc geninfo_unexecuted_blocks=1 00:14:56.377 00:14:56.377 ' 00:14:56.377 15:01:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:56.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.377 --rc genhtml_branch_coverage=1 00:14:56.377 --rc genhtml_function_coverage=1 00:14:56.377 --rc genhtml_legend=1 00:14:56.377 --rc geninfo_all_blocks=1 00:14:56.377 --rc geninfo_unexecuted_blocks=1 00:14:56.377 00:14:56.377 ' 00:14:56.377 15:01:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:56.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.377 --rc genhtml_branch_coverage=1 00:14:56.377 --rc genhtml_function_coverage=1 00:14:56.377 --rc genhtml_legend=1 00:14:56.377 --rc geninfo_all_blocks=1 00:14:56.377 --rc geninfo_unexecuted_blocks=1 00:14:56.377 00:14:56.377 ' 00:14:56.377 15:01:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:56.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.377 --rc genhtml_branch_coverage=1 00:14:56.377 --rc genhtml_function_coverage=1 00:14:56.377 --rc genhtml_legend=1 00:14:56.377 --rc geninfo_all_blocks=1 00:14:56.377 --rc geninfo_unexecuted_blocks=1 00:14:56.377 00:14:56.377 ' 00:14:56.377 15:01:19 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:56.377 15:01:19 -- nvmf/common.sh@7 -- # uname -s 00:14:56.377 15:01:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.377 15:01:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.377 15:01:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.377 15:01:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.377 15:01:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.377 15:01:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.377 15:01:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.377 15:01:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.377 15:01:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.377 15:01:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.653 15:01:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 
00:14:56.653 15:01:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:14:56.653 15:01:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.653 15:01:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.653 15:01:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:56.653 15:01:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:56.653 15:01:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.653 15:01:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.653 15:01:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.653 15:01:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.653 15:01:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.653 15:01:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.653 15:01:19 -- paths/export.sh@5 -- # export PATH 00:14:56.653 15:01:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.653 15:01:19 -- nvmf/common.sh@46 -- # : 0 00:14:56.653 15:01:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:56.653 15:01:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:56.653 15:01:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:56.653 15:01:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.653 15:01:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.653 15:01:19 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:56.653 15:01:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:56.653 15:01:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:56.653 15:01:19 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:56.653 15:01:19 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:56.653 15:01:19 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:56.653 15:01:19 -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:56.653 15:01:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:56.653 15:01:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.653 15:01:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:56.653 15:01:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:56.653 15:01:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:56.653 15:01:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.653 15:01:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.653 15:01:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.653 15:01:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:56.653 15:01:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:56.653 15:01:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:56.653 15:01:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:56.653 15:01:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:56.653 15:01:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:56.653 15:01:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.653 15:01:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.653 15:01:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:56.653 15:01:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:56.653 15:01:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:56.653 15:01:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:56.653 15:01:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:56.654 15:01:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.654 15:01:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:56.654 15:01:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:56.654 15:01:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:56.654 15:01:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:56.654 15:01:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:56.654 15:01:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:56.654 Cannot find device "nvmf_tgt_br" 00:14:56.654 15:01:20 -- nvmf/common.sh@154 -- # true 00:14:56.654 15:01:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:56.654 Cannot find device "nvmf_tgt_br2" 00:14:56.654 15:01:20 -- nvmf/common.sh@155 -- # true 00:14:56.654 15:01:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:56.654 15:01:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:56.654 Cannot find device "nvmf_tgt_br" 00:14:56.654 15:01:20 -- nvmf/common.sh@157 -- # true 00:14:56.654 15:01:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:56.654 Cannot find device "nvmf_tgt_br2" 00:14:56.654 15:01:20 -- nvmf/common.sh@158 -- # true 00:14:56.654 15:01:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:56.654 15:01:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:56.654 15:01:20 -- 
nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:56.654 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.654 15:01:20 -- nvmf/common.sh@161 -- # true 00:14:56.654 15:01:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:56.654 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.654 15:01:20 -- nvmf/common.sh@162 -- # true 00:14:56.654 15:01:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:56.654 15:01:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:56.654 15:01:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:56.654 15:01:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:56.654 15:01:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:56.654 15:01:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:56.654 15:01:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:56.654 15:01:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:56.654 15:01:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:56.654 15:01:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:56.654 15:01:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:56.654 15:01:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:56.654 15:01:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:56.654 15:01:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:56.654 15:01:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:56.654 15:01:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:56.654 15:01:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:56.654 15:01:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:56.913 15:01:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:56.913 15:01:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:56.913 15:01:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:56.913 15:01:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:56.913 15:01:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:56.913 15:01:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:56.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:56.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:14:56.913 00:14:56.913 --- 10.0.0.2 ping statistics --- 00:14:56.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.913 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:56.913 15:01:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:56.913 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:56.913 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:14:56.913 00:14:56.913 --- 10.0.0.3 ping statistics --- 00:14:56.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.913 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:56.913 15:01:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:56.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:56.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:56.913 00:14:56.913 --- 10.0.0.1 ping statistics --- 00:14:56.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.913 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:56.913 15:01:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.913 15:01:20 -- nvmf/common.sh@421 -- # return 0 00:14:56.913 15:01:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:56.913 15:01:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.913 15:01:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:56.913 15:01:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:56.913 15:01:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.913 15:01:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:56.913 15:01:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:56.913 15:01:20 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:56.913 15:01:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:56.913 15:01:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:56.913 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:14:56.913 15:01:20 -- nvmf/common.sh@469 -- # nvmfpid=85274 00:14:56.913 15:01:20 -- nvmf/common.sh@470 -- # waitforlisten 85274 00:14:56.913 15:01:20 -- common/autotest_common.sh@829 -- # '[' -z 85274 ']' 00:14:56.913 15:01:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.913 15:01:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:56.913 15:01:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:56.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.913 15:01:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.913 15:01:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:56.913 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:14:56.913 [2024-11-18 15:01:20.379049] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:56.913 [2024-11-18 15:01:20.379137] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.172 [2024-11-18 15:01:20.518365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.172 [2024-11-18 15:01:20.595401] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:57.172 [2024-11-18 15:01:20.595555] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.172 [2024-11-18 15:01:20.595568] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:57.172 [2024-11-18 15:01:20.595577] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.172 [2024-11-18 15:01:20.595601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.108 15:01:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.108 15:01:21 -- common/autotest_common.sh@862 -- # return 0 00:14:58.108 15:01:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:58.108 15:01:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:58.108 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:14:58.108 15:01:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.108 15:01:21 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:58.108 15:01:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.108 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:14:58.108 [2024-11-18 15:01:21.430585] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.108 15:01:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.109 15:01:21 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:58.109 15:01:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.109 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:14:58.109 Malloc0 00:14:58.109 15:01:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.109 15:01:21 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:58.109 15:01:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.109 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:14:58.109 15:01:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.109 15:01:21 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:58.109 15:01:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.109 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:14:58.109 15:01:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.109 15:01:21 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.109 15:01:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.109 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:14:58.109 [2024-11-18 15:01:21.501520] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.109 15:01:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.109 15:01:21 -- target/queue_depth.sh@30 -- # bdevperf_pid=85330 00:14:58.109 15:01:21 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:58.109 15:01:21 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:58.109 15:01:21 -- target/queue_depth.sh@33 -- # waitforlisten 85330 /var/tmp/bdevperf.sock 00:14:58.109 15:01:21 -- common/autotest_common.sh@829 -- # '[' -z 85330 ']' 00:14:58.109 15:01:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:58.109 15:01:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
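Unlike the JSON-driven bdevperf runs earlier, the queue_depth test starts bdevperf with -z so it idles until configured over RPC, then attaches the target namespace and kicks off the run via /var/tmp/bdevperf.sock. Condensed from the commands traced below, the flow is roughly the following sketch; the repository paths are shortened for readability, the backgrounding "&" stands in for the script's own process handling, and scripts/rpc.py is shown in place of the rpc_cmd helper that wraps it.

  # Condensed, illustrative view of the queue_depth bdevperf flow (full paths appear in the trace)
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &   # -z: wait for RPC
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1            # attach target namespace as NVMe0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests               # run the 10 s verify workload at queue depth 1024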
00:14:58.109 15:01:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:58.109 15:01:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.109 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:14:58.109 [2024-11-18 15:01:21.563820] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:58.109 [2024-11-18 15:01:21.563923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85330 ] 00:14:58.367 [2024-11-18 15:01:21.705722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.368 [2024-11-18 15:01:21.776194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.935 15:01:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.935 15:01:22 -- common/autotest_common.sh@862 -- # return 0 00:14:58.935 15:01:22 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:58.935 15:01:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.935 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:14:59.194 NVMe0n1 00:14:59.194 15:01:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.194 15:01:22 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:59.194 Running I/O for 10 seconds... 00:15:09.178 00:15:09.178 Latency(us) 00:15:09.178 [2024-11-18T15:01:32.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.178 [2024-11-18T15:01:32.768Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:09.178 Verification LBA range: start 0x0 length 0x4000 00:15:09.178 NVMe0n1 : 10.05 17068.10 66.67 0.00 0.00 59805.98 12749.73 48854.11 00:15:09.178 [2024-11-18T15:01:32.768Z] =================================================================================================================== 00:15:09.178 [2024-11-18T15:01:32.768Z] Total : 17068.10 66.67 0.00 0.00 59805.98 12749.73 48854.11 00:15:09.178 0 00:15:09.178 15:01:32 -- target/queue_depth.sh@39 -- # killprocess 85330 00:15:09.178 15:01:32 -- common/autotest_common.sh@936 -- # '[' -z 85330 ']' 00:15:09.178 15:01:32 -- common/autotest_common.sh@940 -- # kill -0 85330 00:15:09.178 15:01:32 -- common/autotest_common.sh@941 -- # uname 00:15:09.178 15:01:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:09.178 15:01:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85330 00:15:09.178 15:01:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:09.178 15:01:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:09.178 killing process with pid 85330 00:15:09.178 15:01:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85330' 00:15:09.178 Received shutdown signal, test time was about 10.000000 seconds 00:15:09.178 00:15:09.178 Latency(us) 00:15:09.178 [2024-11-18T15:01:32.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.178 [2024-11-18T15:01:32.768Z] =================================================================================================================== 00:15:09.178 
[2024-11-18T15:01:32.768Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:09.178 15:01:32 -- common/autotest_common.sh@955 -- # kill 85330 00:15:09.178 15:01:32 -- common/autotest_common.sh@960 -- # wait 85330 00:15:09.437 15:01:32 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:09.437 15:01:32 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:09.437 15:01:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:09.437 15:01:32 -- nvmf/common.sh@116 -- # sync 00:15:09.437 15:01:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:09.437 15:01:32 -- nvmf/common.sh@119 -- # set +e 00:15:09.437 15:01:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:09.437 15:01:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:09.437 rmmod nvme_tcp 00:15:09.437 rmmod nvme_fabrics 00:15:09.696 rmmod nvme_keyring 00:15:09.696 15:01:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:09.696 15:01:33 -- nvmf/common.sh@123 -- # set -e 00:15:09.696 15:01:33 -- nvmf/common.sh@124 -- # return 0 00:15:09.696 15:01:33 -- nvmf/common.sh@477 -- # '[' -n 85274 ']' 00:15:09.696 15:01:33 -- nvmf/common.sh@478 -- # killprocess 85274 00:15:09.696 15:01:33 -- common/autotest_common.sh@936 -- # '[' -z 85274 ']' 00:15:09.696 15:01:33 -- common/autotest_common.sh@940 -- # kill -0 85274 00:15:09.696 15:01:33 -- common/autotest_common.sh@941 -- # uname 00:15:09.696 15:01:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:09.696 15:01:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85274 00:15:09.696 15:01:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:09.696 15:01:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:09.696 killing process with pid 85274 00:15:09.696 15:01:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85274' 00:15:09.696 15:01:33 -- common/autotest_common.sh@955 -- # kill 85274 00:15:09.696 15:01:33 -- common/autotest_common.sh@960 -- # wait 85274 00:15:09.955 15:01:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:09.955 15:01:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:09.955 15:01:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:09.955 15:01:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:09.955 15:01:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:09.955 15:01:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.955 15:01:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:09.955 15:01:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.955 15:01:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:09.955 00:15:09.955 real 0m13.642s 00:15:09.955 user 0m22.440s 00:15:09.955 sys 0m2.627s 00:15:09.955 15:01:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:09.955 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:15:09.955 ************************************ 00:15:09.955 END TEST nvmf_queue_depth 00:15:09.955 ************************************ 00:15:09.955 15:01:33 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:09.955 15:01:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:09.955 15:01:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:09.955 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:15:09.955 ************************************ 00:15:09.955 START TEST nvmf_multipath 00:15:09.955 
************************************ 00:15:09.955 15:01:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:09.955 * Looking for test storage... 00:15:09.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:10.214 15:01:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:10.214 15:01:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:10.214 15:01:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:10.214 15:01:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:10.214 15:01:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:10.215 15:01:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:10.215 15:01:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:10.215 15:01:33 -- scripts/common.sh@335 -- # IFS=.-: 00:15:10.215 15:01:33 -- scripts/common.sh@335 -- # read -ra ver1 00:15:10.215 15:01:33 -- scripts/common.sh@336 -- # IFS=.-: 00:15:10.215 15:01:33 -- scripts/common.sh@336 -- # read -ra ver2 00:15:10.215 15:01:33 -- scripts/common.sh@337 -- # local 'op=<' 00:15:10.215 15:01:33 -- scripts/common.sh@339 -- # ver1_l=2 00:15:10.215 15:01:33 -- scripts/common.sh@340 -- # ver2_l=1 00:15:10.215 15:01:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:10.215 15:01:33 -- scripts/common.sh@343 -- # case "$op" in 00:15:10.215 15:01:33 -- scripts/common.sh@344 -- # : 1 00:15:10.215 15:01:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:10.215 15:01:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:10.215 15:01:33 -- scripts/common.sh@364 -- # decimal 1 00:15:10.215 15:01:33 -- scripts/common.sh@352 -- # local d=1 00:15:10.215 15:01:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:10.215 15:01:33 -- scripts/common.sh@354 -- # echo 1 00:15:10.215 15:01:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:10.215 15:01:33 -- scripts/common.sh@365 -- # decimal 2 00:15:10.215 15:01:33 -- scripts/common.sh@352 -- # local d=2 00:15:10.215 15:01:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:10.215 15:01:33 -- scripts/common.sh@354 -- # echo 2 00:15:10.215 15:01:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:10.215 15:01:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:10.215 15:01:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:10.215 15:01:33 -- scripts/common.sh@367 -- # return 0 00:15:10.215 15:01:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:10.215 15:01:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:10.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.215 --rc genhtml_branch_coverage=1 00:15:10.215 --rc genhtml_function_coverage=1 00:15:10.215 --rc genhtml_legend=1 00:15:10.215 --rc geninfo_all_blocks=1 00:15:10.215 --rc geninfo_unexecuted_blocks=1 00:15:10.215 00:15:10.215 ' 00:15:10.215 15:01:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:10.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.215 --rc genhtml_branch_coverage=1 00:15:10.215 --rc genhtml_function_coverage=1 00:15:10.215 --rc genhtml_legend=1 00:15:10.215 --rc geninfo_all_blocks=1 00:15:10.215 --rc geninfo_unexecuted_blocks=1 00:15:10.215 00:15:10.215 ' 00:15:10.215 15:01:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:10.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.215 --rc 
genhtml_branch_coverage=1 00:15:10.215 --rc genhtml_function_coverage=1 00:15:10.215 --rc genhtml_legend=1 00:15:10.215 --rc geninfo_all_blocks=1 00:15:10.215 --rc geninfo_unexecuted_blocks=1 00:15:10.215 00:15:10.215 ' 00:15:10.215 15:01:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:10.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.215 --rc genhtml_branch_coverage=1 00:15:10.215 --rc genhtml_function_coverage=1 00:15:10.215 --rc genhtml_legend=1 00:15:10.215 --rc geninfo_all_blocks=1 00:15:10.215 --rc geninfo_unexecuted_blocks=1 00:15:10.215 00:15:10.215 ' 00:15:10.215 15:01:33 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:10.215 15:01:33 -- nvmf/common.sh@7 -- # uname -s 00:15:10.215 15:01:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.215 15:01:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.215 15:01:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.215 15:01:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.215 15:01:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.215 15:01:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.215 15:01:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.215 15:01:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.215 15:01:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.215 15:01:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.215 15:01:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:15:10.215 15:01:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:15:10.215 15:01:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.215 15:01:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.215 15:01:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:10.215 15:01:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:10.215 15:01:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.215 15:01:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.215 15:01:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.215 15:01:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.215 15:01:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.215 15:01:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.215 15:01:33 -- paths/export.sh@5 -- # export PATH 00:15:10.215 15:01:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.215 15:01:33 -- nvmf/common.sh@46 -- # : 0 00:15:10.215 15:01:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:10.215 15:01:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:10.215 15:01:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:10.215 15:01:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.215 15:01:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.215 15:01:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:10.215 15:01:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:10.215 15:01:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:10.215 15:01:33 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:10.215 15:01:33 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:10.215 15:01:33 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:10.215 15:01:33 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:10.215 15:01:33 -- target/multipath.sh@43 -- # nvmftestinit 00:15:10.215 15:01:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:10.215 15:01:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:10.215 15:01:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:10.215 15:01:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:10.215 15:01:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:10.215 15:01:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.215 15:01:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.215 15:01:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.215 15:01:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:10.215 15:01:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:10.215 15:01:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:10.215 15:01:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:10.215 15:01:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:10.215 15:01:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:10.215 15:01:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:10.215 15:01:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:10.215 15:01:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:10.215 15:01:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:10.215 15:01:33 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:10.215 15:01:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:10.215 15:01:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:10.215 15:01:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:10.215 15:01:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:10.216 15:01:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:10.216 15:01:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:10.216 15:01:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:10.216 15:01:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:10.216 15:01:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:10.216 Cannot find device "nvmf_tgt_br" 00:15:10.216 15:01:33 -- nvmf/common.sh@154 -- # true 00:15:10.216 15:01:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:10.216 Cannot find device "nvmf_tgt_br2" 00:15:10.216 15:01:33 -- nvmf/common.sh@155 -- # true 00:15:10.216 15:01:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:10.216 15:01:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:10.216 Cannot find device "nvmf_tgt_br" 00:15:10.216 15:01:33 -- nvmf/common.sh@157 -- # true 00:15:10.216 15:01:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:10.216 Cannot find device "nvmf_tgt_br2" 00:15:10.216 15:01:33 -- nvmf/common.sh@158 -- # true 00:15:10.216 15:01:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:10.216 15:01:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:10.475 15:01:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:10.475 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:10.475 15:01:33 -- nvmf/common.sh@161 -- # true 00:15:10.475 15:01:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:10.475 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:10.475 15:01:33 -- nvmf/common.sh@162 -- # true 00:15:10.475 15:01:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:10.475 15:01:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:10.475 15:01:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:10.475 15:01:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:10.475 15:01:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:10.475 15:01:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:10.475 15:01:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:10.475 15:01:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:10.475 15:01:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:10.475 15:01:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:10.475 15:01:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:10.475 15:01:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:10.475 15:01:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:10.475 15:01:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:15:10.475 15:01:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:10.475 15:01:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:10.475 15:01:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:10.475 15:01:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:10.475 15:01:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:10.475 15:01:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:10.475 15:01:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:10.475 15:01:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:10.475 15:01:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:10.475 15:01:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:10.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:10.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:15:10.475 00:15:10.475 --- 10.0.0.2 ping statistics --- 00:15:10.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.475 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:10.475 15:01:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:10.475 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:10.475 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:15:10.475 00:15:10.475 --- 10.0.0.3 ping statistics --- 00:15:10.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.475 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:15:10.475 15:01:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:10.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:10.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:15:10.475 00:15:10.475 --- 10.0.0.1 ping statistics --- 00:15:10.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.475 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:10.475 15:01:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:10.475 15:01:34 -- nvmf/common.sh@421 -- # return 0 00:15:10.475 15:01:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:10.475 15:01:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:10.475 15:01:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:10.475 15:01:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:10.475 15:01:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:10.475 15:01:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:10.475 15:01:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:10.475 15:01:34 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:10.475 15:01:34 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:10.475 15:01:34 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:10.475 15:01:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:10.475 15:01:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:10.475 15:01:34 -- common/autotest_common.sh@10 -- # set +x 00:15:10.475 15:01:34 -- nvmf/common.sh@469 -- # nvmfpid=85671 00:15:10.475 15:01:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:10.475 15:01:34 -- nvmf/common.sh@470 -- # waitforlisten 85671 00:15:10.475 15:01:34 -- common/autotest_common.sh@829 -- # '[' -z 85671 ']' 00:15:10.475 15:01:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.475 15:01:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:10.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.475 15:01:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.475 15:01:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:10.475 15:01:34 -- common/autotest_common.sh@10 -- # set +x 00:15:10.734 [2024-11-18 15:01:34.104495] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:10.734 [2024-11-18 15:01:34.104580] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.734 [2024-11-18 15:01:34.246808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:10.734 [2024-11-18 15:01:34.317690] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:10.734 [2024-11-18 15:01:34.317865] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.734 [2024-11-18 15:01:34.317883] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.734 [2024-11-18 15:01:34.317894] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
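The multipath exercise that the trace below walks through reduces to a short command sequence. A minimal sketch follows, assuming the repo's rpc.py and nvme-cli as used elsewhere in this run; the connect flags are abbreviated (the real test also passes --hostnqn/--hostid and -g -G), so treat it as an outline rather than the literal multipath.sh script.
# Two listeners on the same subsystem give the host two ANA-managed paths.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem $nqn -a -s SPDKISFASTANDAWESOME -r
$rpc nvmf_subsystem_add_ns $nqn Malloc0
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420     # path 1
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4420     # path 2
nvme connect -t tcp -n $nqn -a 10.0.0.2 -s 4420                      # one controller per path
nvme connect -t tcp -n $nqn -a 10.0.0.3 -s 4420
cat /sys/block/nvme0c0n1/ana_state /sys/block/nvme0c1n1/ana_state    # both start out "optimized"
# Failover is then driven by flipping per-listener ANA state while fio runs:
$rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
$rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.3 -s 4420 -n non_optimized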
00:15:10.734 [2024-11-18 15:01:34.318046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.734 [2024-11-18 15:01:34.318213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.734 [2024-11-18 15:01:34.318816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:10.734 [2024-11-18 15:01:34.318853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.670 15:01:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:11.670 15:01:35 -- common/autotest_common.sh@862 -- # return 0 00:15:11.670 15:01:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:11.670 15:01:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:11.670 15:01:35 -- common/autotest_common.sh@10 -- # set +x 00:15:11.670 15:01:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.670 15:01:35 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:11.929 [2024-11-18 15:01:35.406273] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.929 15:01:35 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:12.188 Malloc0 00:15:12.188 15:01:35 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:12.446 15:01:35 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:12.705 15:01:36 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.964 [2024-11-18 15:01:36.347063] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.964 15:01:36 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:13.224 [2024-11-18 15:01:36.627446] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:13.224 15:01:36 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:13.483 15:01:36 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:13.743 15:01:37 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:13.743 15:01:37 -- common/autotest_common.sh@1187 -- # local i=0 00:15:13.743 15:01:37 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:13.743 15:01:37 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:13.743 15:01:37 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:15.647 15:01:39 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:15.647 15:01:39 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:15.647 15:01:39 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:15.647 15:01:39 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:15.647 15:01:39 -- 
common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:15.647 15:01:39 -- common/autotest_common.sh@1197 -- # return 0 00:15:15.647 15:01:39 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:15.647 15:01:39 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:15.647 15:01:39 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:15.647 15:01:39 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:15.647 15:01:39 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:15.647 15:01:39 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:15.647 15:01:39 -- target/multipath.sh@38 -- # return 0 00:15:15.647 15:01:39 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:15.647 15:01:39 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:15.647 15:01:39 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:15.647 15:01:39 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:15.647 15:01:39 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:15.647 15:01:39 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:15.647 15:01:39 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:15.647 15:01:39 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:15.647 15:01:39 -- target/multipath.sh@22 -- # local timeout=20 00:15:15.647 15:01:39 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:15.647 15:01:39 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:15.647 15:01:39 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:15.647 15:01:39 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:15.647 15:01:39 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:15.647 15:01:39 -- target/multipath.sh@22 -- # local timeout=20 00:15:15.647 15:01:39 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:15.647 15:01:39 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:15.647 15:01:39 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:15.647 15:01:39 -- target/multipath.sh@85 -- # echo numa 00:15:15.647 15:01:39 -- target/multipath.sh@88 -- # fio_pid=85814 00:15:15.647 15:01:39 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:15.647 15:01:39 -- target/multipath.sh@90 -- # sleep 1 00:15:15.647 [global] 00:15:15.647 thread=1 00:15:15.647 invalidate=1 00:15:15.647 rw=randrw 00:15:15.647 time_based=1 00:15:15.647 runtime=6 00:15:15.647 ioengine=libaio 00:15:15.647 direct=1 00:15:15.647 bs=4096 00:15:15.647 iodepth=128 00:15:15.647 norandommap=0 00:15:15.647 numjobs=1 00:15:15.647 00:15:15.647 verify_dump=1 00:15:15.647 verify_backlog=512 00:15:15.647 verify_state_save=0 00:15:15.647 do_verify=1 00:15:15.647 verify=crc32c-intel 00:15:15.647 [job0] 00:15:15.647 filename=/dev/nvme0n1 00:15:15.647 Could not set queue depth (nvme0n1) 00:15:15.905 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:15.905 fio-3.35 00:15:15.905 Starting 1 thread 00:15:16.843 15:01:40 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:16.843 15:01:40 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:17.101 15:01:40 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:17.101 15:01:40 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:17.101 15:01:40 -- target/multipath.sh@22 -- # local timeout=20 00:15:17.101 15:01:40 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:17.101 15:01:40 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:17.101 15:01:40 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:17.101 15:01:40 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:17.101 15:01:40 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:17.101 15:01:40 -- target/multipath.sh@22 -- # local timeout=20 00:15:17.101 15:01:40 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:17.101 15:01:40 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:17.102 15:01:40 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:17.102 15:01:40 -- target/multipath.sh@25 -- # sleep 1s 00:15:18.479 15:01:41 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:18.479 15:01:41 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:18.479 15:01:41 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:18.479 15:01:41 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:18.479 15:01:41 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:18.738 15:01:42 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:18.738 15:01:42 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:18.738 15:01:42 -- target/multipath.sh@22 -- # local timeout=20 00:15:18.738 15:01:42 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:18.738 15:01:42 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:18.738 15:01:42 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:18.738 15:01:42 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:18.738 15:01:42 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:18.738 15:01:42 -- target/multipath.sh@22 -- # local timeout=20 00:15:18.738 15:01:42 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:18.738 15:01:42 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:18.738 15:01:42 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:18.738 15:01:42 -- target/multipath.sh@25 -- # sleep 1s 00:15:19.675 15:01:43 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:19.675 15:01:43 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:19.675 15:01:43 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:19.675 15:01:43 -- target/multipath.sh@104 -- # wait 85814 00:15:22.208 00:15:22.208 job0: (groupid=0, jobs=1): err= 0: pid=85835: Mon Nov 18 15:01:45 2024 00:15:22.208 read: IOPS=13.1k, BW=51.1MiB/s (53.6MB/s)(307MiB/6005msec) 00:15:22.208 slat (usec): min=3, max=5339, avg=44.22, stdev=196.08 00:15:22.208 clat (usec): min=858, max=13459, avg=6744.03, stdev=1097.36 00:15:22.208 lat (usec): min=1110, max=13467, avg=6788.24, stdev=1105.32 00:15:22.208 clat percentiles (usec): 00:15:22.208 | 1.00th=[ 4146], 5.00th=[ 5145], 10.00th=[ 5538], 20.00th=[ 5997], 00:15:22.208 | 30.00th=[ 6259], 40.00th=[ 6456], 50.00th=[ 6652], 60.00th=[ 6849], 00:15:22.209 | 70.00th=[ 7177], 80.00th=[ 7504], 90.00th=[ 7963], 95.00th=[ 8586], 00:15:22.209 | 99.00th=[10159], 99.50th=[10945], 99.90th=[12518], 99.95th=[13042], 00:15:22.209 | 99.99th=[13435] 00:15:22.209 bw ( KiB/s): min=11576, max=33896, per=53.27%, avg=27867.64, stdev=6266.65, samples=11 00:15:22.209 iops : min= 2894, max= 8474, avg=6966.91, stdev=1566.66, samples=11 00:15:22.209 write: IOPS=7700, BW=30.1MiB/s (31.5MB/s)(155MiB/5149msec); 0 zone resets 00:15:22.209 slat (usec): min=13, max=1817, avg=54.46, stdev=129.83 00:15:22.209 clat (usec): min=913, max=13638, avg=5849.37, stdev=904.62 00:15:22.209 lat (usec): min=956, max=13663, avg=5903.83, stdev=907.32 00:15:22.209 clat percentiles (usec): 00:15:22.209 | 1.00th=[ 3195], 5.00th=[ 4293], 10.00th=[ 4948], 20.00th=[ 5276], 00:15:22.209 | 30.00th=[ 5538], 40.00th=[ 5735], 50.00th=[ 5866], 60.00th=[ 6063], 00:15:22.209 | 70.00th=[ 6194], 80.00th=[ 6456], 90.00th=[ 6718], 95.00th=[ 7046], 00:15:22.209 | 99.00th=[ 8586], 99.50th=[ 9372], 99.90th=[10945], 99.95th=[11994], 00:15:22.209 | 99.99th=[13435] 00:15:22.209 bw ( KiB/s): min=12200, max=33680, per=90.50%, avg=27876.91, stdev=5920.36, samples=11 00:15:22.209 iops : min= 3050, max= 8420, avg=6969.18, stdev=1480.08, samples=11 00:15:22.209 lat (usec) : 1000=0.01% 00:15:22.209 lat (msec) : 2=0.05%, 4=1.74%, 10=97.32%, 20=0.88% 00:15:22.209 cpu : usr=5.80%, sys=24.60%, ctx=7201, majf=0, minf=127 00:15:22.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:22.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:22.209 issued rwts: total=78535,39651,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:22.209 00:15:22.209 Run status group 0 (all jobs): 00:15:22.209 READ: bw=51.1MiB/s (53.6MB/s), 51.1MiB/s-51.1MiB/s (53.6MB/s-53.6MB/s), io=307MiB (322MB), run=6005-6005msec 00:15:22.209 WRITE: bw=30.1MiB/s (31.5MB/s), 30.1MiB/s-30.1MiB/s (31.5MB/s-31.5MB/s), io=155MiB (162MB), run=5149-5149msec 00:15:22.209 00:15:22.209 Disk stats (read/write): 00:15:22.209 nvme0n1: ios=77575/38875, merge=0/0, ticks=485025/209945, in_queue=694970, util=98.66% 00:15:22.209 15:01:45 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:22.209 15:01:45 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:22.468 15:01:45 -- target/multipath.sh@109 -- # check_ana_state 
nvme0c0n1 optimized 00:15:22.468 15:01:45 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:22.468 15:01:45 -- target/multipath.sh@22 -- # local timeout=20 00:15:22.468 15:01:45 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:22.468 15:01:45 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:22.468 15:01:45 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:22.468 15:01:45 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:22.468 15:01:45 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:22.468 15:01:45 -- target/multipath.sh@22 -- # local timeout=20 00:15:22.468 15:01:45 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:22.468 15:01:45 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:22.468 15:01:45 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:22.468 15:01:45 -- target/multipath.sh@25 -- # sleep 1s 00:15:23.405 15:01:46 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:23.405 15:01:46 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:23.405 15:01:46 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:23.405 15:01:46 -- target/multipath.sh@113 -- # echo round-robin 00:15:23.405 15:01:46 -- target/multipath.sh@116 -- # fio_pid=85969 00:15:23.405 15:01:46 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:23.405 15:01:46 -- target/multipath.sh@118 -- # sleep 1 00:15:23.664 [global] 00:15:23.664 thread=1 00:15:23.664 invalidate=1 00:15:23.664 rw=randrw 00:15:23.664 time_based=1 00:15:23.664 runtime=6 00:15:23.664 ioengine=libaio 00:15:23.664 direct=1 00:15:23.664 bs=4096 00:15:23.664 iodepth=128 00:15:23.664 norandommap=0 00:15:23.664 numjobs=1 00:15:23.664 00:15:23.664 verify_dump=1 00:15:23.664 verify_backlog=512 00:15:23.664 verify_state_save=0 00:15:23.664 do_verify=1 00:15:23.664 verify=crc32c-intel 00:15:23.664 [job0] 00:15:23.664 filename=/dev/nvme0n1 00:15:23.664 Could not set queue depth (nvme0n1) 00:15:23.665 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:23.665 fio-3.35 00:15:23.665 Starting 1 thread 00:15:24.602 15:01:47 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:24.861 15:01:48 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:25.119 15:01:48 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:25.119 15:01:48 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:25.119 15:01:48 -- target/multipath.sh@22 -- # local timeout=20 00:15:25.119 15:01:48 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:25.119 15:01:48 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:25.119 15:01:48 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:25.119 15:01:48 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:25.119 15:01:48 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:25.119 15:01:48 -- target/multipath.sh@22 -- # local timeout=20 00:15:25.119 15:01:48 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:25.119 15:01:48 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:25.119 15:01:48 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:25.119 15:01:48 -- target/multipath.sh@25 -- # sleep 1s 00:15:26.056 15:01:49 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:26.056 15:01:49 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:26.056 15:01:49 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:26.056 15:01:49 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:26.315 15:01:49 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:26.574 15:01:50 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:26.574 15:01:50 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:26.574 15:01:50 -- target/multipath.sh@22 -- # local timeout=20 00:15:26.574 15:01:50 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:26.574 15:01:50 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:26.574 15:01:50 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:26.574 15:01:50 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:26.574 15:01:50 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:26.574 15:01:50 -- target/multipath.sh@22 -- # local timeout=20 00:15:26.574 15:01:50 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:26.574 15:01:50 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:26.574 15:01:50 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:26.574 15:01:50 -- target/multipath.sh@25 -- # sleep 1s 00:15:27.511 15:01:51 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:27.511 15:01:51 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:27.511 15:01:51 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:27.511 15:01:51 -- target/multipath.sh@132 -- # wait 85969 00:15:30.046 00:15:30.046 job0: (groupid=0, jobs=1): err= 0: pid=85991: Mon Nov 18 15:01:53 2024 00:15:30.046 read: IOPS=13.1k, BW=51.4MiB/s (53.9MB/s)(308MiB/6001msec) 00:15:30.046 slat (usec): min=2, max=7487, avg=37.53, stdev=180.94 00:15:30.046 clat (usec): min=502, max=17080, avg=6706.17, stdev=1706.92 00:15:30.046 lat (usec): min=513, max=17088, avg=6743.69, stdev=1710.03 00:15:30.046 clat percentiles (usec): 00:15:30.046 | 1.00th=[ 2212], 5.00th=[ 3818], 10.00th=[ 5014], 20.00th=[ 5735], 00:15:30.046 | 30.00th=[ 5997], 40.00th=[ 6325], 50.00th=[ 6652], 60.00th=[ 6915], 00:15:30.046 | 70.00th=[ 7242], 80.00th=[ 7635], 90.00th=[ 8586], 95.00th=[ 9765], 00:15:30.046 | 99.00th=[11863], 99.50th=[12780], 99.90th=[14877], 99.95th=[15664], 00:15:30.046 | 99.99th=[16319] 00:15:30.046 bw ( KiB/s): min=18344, max=31904, per=51.12%, avg=26889.45, stdev=5480.91, samples=11 00:15:30.046 iops : min= 4586, max= 7976, avg=6722.36, stdev=1370.23, samples=11 00:15:30.046 write: IOPS=7517, BW=29.4MiB/s (30.8MB/s)(158MiB/5374msec); 0 zone resets 00:15:30.046 slat (usec): min=3, max=2820, avg=49.24, stdev=116.80 00:15:30.046 clat (usec): min=959, max=14221, avg=5718.95, stdev=1423.13 00:15:30.046 lat (usec): min=1009, max=14245, avg=5768.19, stdev=1425.83 00:15:30.046 clat percentiles (usec): 00:15:30.046 | 1.00th=[ 2147], 5.00th=[ 2966], 10.00th=[ 3654], 20.00th=[ 4948], 00:15:30.046 | 30.00th=[ 5407], 40.00th=[ 5669], 50.00th=[ 5866], 60.00th=[ 6063], 00:15:30.046 | 70.00th=[ 6259], 80.00th=[ 6521], 90.00th=[ 7046], 95.00th=[ 8094], 00:15:30.046 | 99.00th=[ 9503], 99.50th=[10159], 99.90th=[11994], 99.95th=[12780], 00:15:30.046 | 99.99th=[13566] 00:15:30.046 bw ( KiB/s): min=18936, max=32736, per=89.19%, avg=26821.09, stdev=5217.27, samples=11 00:15:30.046 iops : min= 4734, max= 8184, avg=6705.27, stdev=1304.32, samples=11 00:15:30.046 lat (usec) : 750=0.02%, 1000=0.05% 00:15:30.046 lat (msec) : 2=0.65%, 4=7.30%, 10=88.88%, 20=3.09% 00:15:30.046 cpu : usr=6.20%, sys=24.14%, ctx=7793, majf=0, minf=66 00:15:30.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:30.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:30.046 issued rwts: total=78913,40401,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:30.046 00:15:30.046 Run status group 0 (all jobs): 00:15:30.046 READ: bw=51.4MiB/s (53.9MB/s), 51.4MiB/s-51.4MiB/s (53.9MB/s-53.9MB/s), io=308MiB (323MB), run=6001-6001msec 00:15:30.046 WRITE: bw=29.4MiB/s (30.8MB/s), 29.4MiB/s-29.4MiB/s (30.8MB/s-30.8MB/s), io=158MiB (165MB), run=5374-5374msec 00:15:30.046 00:15:30.046 Disk stats (read/write): 00:15:30.046 nvme0n1: ios=77990/39563, merge=0/0, ticks=486772/210252, in_queue=697024, util=98.61% 00:15:30.046 15:01:53 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:30.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:30.046 15:01:53 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:30.046 15:01:53 -- common/autotest_common.sh@1208 -- # local i=0 00:15:30.046 15:01:53 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:30.046 15:01:53 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:30.046 15:01:53 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:30.046 15:01:53 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:30.046 15:01:53 -- common/autotest_common.sh@1220 -- # return 0 00:15:30.046 15:01:53 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:30.306 15:01:53 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:30.306 15:01:53 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:30.306 15:01:53 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:30.306 15:01:53 -- target/multipath.sh@144 -- # nvmftestfini 00:15:30.306 15:01:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:30.306 15:01:53 -- nvmf/common.sh@116 -- # sync 00:15:30.306 15:01:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:30.306 15:01:53 -- nvmf/common.sh@119 -- # set +e 00:15:30.306 15:01:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:30.306 15:01:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:30.306 rmmod nvme_tcp 00:15:30.306 rmmod nvme_fabrics 00:15:30.306 rmmod nvme_keyring 00:15:30.306 15:01:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:30.564 15:01:53 -- nvmf/common.sh@123 -- # set -e 00:15:30.564 15:01:53 -- nvmf/common.sh@124 -- # return 0 00:15:30.564 15:01:53 -- nvmf/common.sh@477 -- # '[' -n 85671 ']' 00:15:30.564 15:01:53 -- nvmf/common.sh@478 -- # killprocess 85671 00:15:30.564 15:01:53 -- common/autotest_common.sh@936 -- # '[' -z 85671 ']' 00:15:30.564 15:01:53 -- common/autotest_common.sh@940 -- # kill -0 85671 00:15:30.564 15:01:53 -- common/autotest_common.sh@941 -- # uname 00:15:30.564 15:01:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:30.564 15:01:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85671 00:15:30.564 killing process with pid 85671 00:15:30.564 15:01:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:30.564 15:01:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:30.564 15:01:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85671' 00:15:30.564 15:01:53 -- common/autotest_common.sh@955 -- # kill 85671 00:15:30.564 15:01:53 -- common/autotest_common.sh@960 -- # wait 85671 00:15:30.564 15:01:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:30.564 15:01:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:30.564 15:01:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:30.564 15:01:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:30.564 15:01:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:30.564 15:01:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.564 15:01:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.564 15:01:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.843 15:01:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:30.843 00:15:30.843 real 0m20.699s 00:15:30.843 user 1m21.093s 00:15:30.843 sys 0m6.414s 00:15:30.843 15:01:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:30.843 ************************************ 00:15:30.843 END TEST nvmf_multipath 00:15:30.843 ************************************ 00:15:30.843 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:15:30.843 15:01:54 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:30.843 15:01:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:30.843 15:01:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:30.843 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:15:30.844 ************************************ 00:15:30.844 START TEST nvmf_zcopy 00:15:30.844 ************************************ 00:15:30.844 15:01:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:30.844 * Looking for test storage... 00:15:30.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:30.844 15:01:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:30.844 15:01:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:30.844 15:01:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:30.844 15:01:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:30.844 15:01:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:30.844 15:01:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:30.844 15:01:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:30.844 15:01:54 -- scripts/common.sh@335 -- # IFS=.-: 00:15:30.844 15:01:54 -- scripts/common.sh@335 -- # read -ra ver1 00:15:30.844 15:01:54 -- scripts/common.sh@336 -- # IFS=.-: 00:15:30.844 15:01:54 -- scripts/common.sh@336 -- # read -ra ver2 00:15:30.844 15:01:54 -- scripts/common.sh@337 -- # local 'op=<' 00:15:30.844 15:01:54 -- scripts/common.sh@339 -- # ver1_l=2 00:15:30.844 15:01:54 -- scripts/common.sh@340 -- # ver2_l=1 00:15:30.844 15:01:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:30.844 15:01:54 -- scripts/common.sh@343 -- # case "$op" in 00:15:30.844 15:01:54 -- scripts/common.sh@344 -- # : 1 00:15:30.844 15:01:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:30.844 15:01:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:30.844 15:01:54 -- scripts/common.sh@364 -- # decimal 1 00:15:30.844 15:01:54 -- scripts/common.sh@352 -- # local d=1 00:15:30.844 15:01:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:30.844 15:01:54 -- scripts/common.sh@354 -- # echo 1 00:15:30.844 15:01:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:30.844 15:01:54 -- scripts/common.sh@365 -- # decimal 2 00:15:30.844 15:01:54 -- scripts/common.sh@352 -- # local d=2 00:15:30.844 15:01:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:30.844 15:01:54 -- scripts/common.sh@354 -- # echo 2 00:15:30.844 15:01:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:30.844 15:01:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:30.844 15:01:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:30.844 15:01:54 -- scripts/common.sh@367 -- # return 0 00:15:30.844 15:01:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:30.844 15:01:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:30.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.844 --rc genhtml_branch_coverage=1 00:15:30.844 --rc genhtml_function_coverage=1 00:15:30.844 --rc genhtml_legend=1 00:15:30.844 --rc geninfo_all_blocks=1 00:15:30.844 --rc geninfo_unexecuted_blocks=1 00:15:30.844 00:15:30.844 ' 00:15:30.844 15:01:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:30.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.844 --rc genhtml_branch_coverage=1 00:15:30.844 --rc genhtml_function_coverage=1 00:15:30.844 --rc genhtml_legend=1 00:15:30.844 --rc geninfo_all_blocks=1 00:15:30.844 --rc geninfo_unexecuted_blocks=1 00:15:30.844 00:15:30.844 ' 00:15:30.844 15:01:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:30.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.844 --rc genhtml_branch_coverage=1 00:15:30.844 --rc genhtml_function_coverage=1 00:15:30.844 --rc genhtml_legend=1 00:15:30.844 --rc geninfo_all_blocks=1 00:15:30.844 --rc geninfo_unexecuted_blocks=1 00:15:30.844 00:15:30.844 ' 00:15:30.844 15:01:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:30.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.844 --rc genhtml_branch_coverage=1 00:15:30.844 --rc genhtml_function_coverage=1 00:15:30.844 --rc genhtml_legend=1 00:15:30.844 --rc geninfo_all_blocks=1 00:15:30.844 --rc geninfo_unexecuted_blocks=1 00:15:30.844 00:15:30.844 ' 00:15:30.844 15:01:54 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:30.844 15:01:54 -- nvmf/common.sh@7 -- # uname -s 00:15:30.844 15:01:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.844 15:01:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.844 15:01:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.844 15:01:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.844 15:01:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.844 15:01:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.844 15:01:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.844 15:01:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.844 15:01:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.844 15:01:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.117 15:01:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:15:31.117 
15:01:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:15:31.117 15:01:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.117 15:01:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.117 15:01:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:31.117 15:01:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:31.117 15:01:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.117 15:01:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.117 15:01:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.117 15:01:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.117 15:01:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.117 15:01:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.117 15:01:54 -- paths/export.sh@5 -- # export PATH 00:15:31.117 15:01:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.117 15:01:54 -- nvmf/common.sh@46 -- # : 0 00:15:31.117 15:01:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:31.117 15:01:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:31.117 15:01:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:31.117 15:01:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.117 15:01:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.117 15:01:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
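Before any NVMe/TCP traffic flows, nvmf_veth_init (from test/nvmf/common.sh) rebuilds the virtual topology that the trace below tears down and recreates: one initiator-side veth in the root namespace, two target-side veths inside nvmf_tgt_ns_spdk, and a bridge joining the peer ends. A condensed sketch of those steps, with interface names and addresses as they appear in this log:
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target, path 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # target, path 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator side
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # sanity-check both target addresses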
00:15:31.117 15:01:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:31.117 15:01:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:31.117 15:01:54 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:31.117 15:01:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:31.117 15:01:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.117 15:01:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:31.117 15:01:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:31.117 15:01:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:31.117 15:01:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.117 15:01:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.117 15:01:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.117 15:01:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:31.117 15:01:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:31.117 15:01:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:31.117 15:01:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:31.117 15:01:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:31.117 15:01:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:31.117 15:01:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:31.117 15:01:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:31.117 15:01:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:31.117 15:01:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:31.117 15:01:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:31.117 15:01:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:31.117 15:01:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:31.117 15:01:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:31.117 15:01:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:31.117 15:01:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:31.117 15:01:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:31.117 15:01:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:31.117 15:01:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:31.117 15:01:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:31.117 Cannot find device "nvmf_tgt_br" 00:15:31.117 15:01:54 -- nvmf/common.sh@154 -- # true 00:15:31.117 15:01:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:31.117 Cannot find device "nvmf_tgt_br2" 00:15:31.117 15:01:54 -- nvmf/common.sh@155 -- # true 00:15:31.117 15:01:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:31.117 15:01:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:31.117 Cannot find device "nvmf_tgt_br" 00:15:31.117 15:01:54 -- nvmf/common.sh@157 -- # true 00:15:31.117 15:01:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:31.117 Cannot find device "nvmf_tgt_br2" 00:15:31.117 15:01:54 -- nvmf/common.sh@158 -- # true 00:15:31.117 15:01:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:31.117 15:01:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:31.117 15:01:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:31.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.117 15:01:54 -- nvmf/common.sh@161 -- # true 00:15:31.117 15:01:54 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:31.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.117 15:01:54 -- nvmf/common.sh@162 -- # true 00:15:31.117 15:01:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:31.117 15:01:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:31.117 15:01:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:31.117 15:01:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:31.117 15:01:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:31.117 15:01:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:31.117 15:01:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:31.117 15:01:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:31.117 15:01:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:31.117 15:01:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:31.117 15:01:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:31.117 15:01:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:31.117 15:01:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:31.117 15:01:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:31.117 15:01:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:31.118 15:01:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:31.118 15:01:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:31.118 15:01:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:31.118 15:01:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:31.376 15:01:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:31.376 15:01:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:31.376 15:01:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:31.376 15:01:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:31.376 15:01:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:31.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:31.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:15:31.376 00:15:31.377 --- 10.0.0.2 ping statistics --- 00:15:31.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.377 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:31.377 15:01:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:31.377 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:31.377 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:15:31.377 00:15:31.377 --- 10.0.0.3 ping statistics --- 00:15:31.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.377 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:31.377 15:01:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:31.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:31.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:15:31.377 00:15:31.377 --- 10.0.0.1 ping statistics --- 00:15:31.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.377 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:15:31.377 15:01:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.377 15:01:54 -- nvmf/common.sh@421 -- # return 0 00:15:31.377 15:01:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:31.377 15:01:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.377 15:01:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:31.377 15:01:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:31.377 15:01:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.377 15:01:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:31.377 15:01:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:31.377 15:01:54 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:31.377 15:01:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:31.377 15:01:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:31.377 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:15:31.377 15:01:54 -- nvmf/common.sh@469 -- # nvmfpid=86268 00:15:31.377 15:01:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:31.377 15:01:54 -- nvmf/common.sh@470 -- # waitforlisten 86268 00:15:31.377 15:01:54 -- common/autotest_common.sh@829 -- # '[' -z 86268 ']' 00:15:31.377 15:01:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.377 15:01:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:31.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.377 15:01:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.377 15:01:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:31.377 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:15:31.377 [2024-11-18 15:01:54.837476] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:31.377 [2024-11-18 15:01:54.837562] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.635 [2024-11-18 15:01:54.980540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.635 [2024-11-18 15:01:55.051795] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:31.635 [2024-11-18 15:01:55.051979] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.635 [2024-11-18 15:01:55.051997] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.635 [2024-11-18 15:01:55.052009] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
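Once the target app is up inside the namespace, the zcopy test enables a zero-copy TCP transport over RPC and then drives it from bdevperf with a generated JSON config. A rough sketch follows; flags and sizes are taken from this run, but the JSON wrapper is an assumption about the standard SPDK bdev-subsystem layout (the exact output of gen_nvmf_target_json may differ), and the file name is hypothetical since the test actually streams the config through /dev/fd.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy       # in-capsule data size 0, zero-copy enabled
$rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0              # 32 MiB bdev, 4 KiB blocks
$rpc nvmf_subsystem_add_ns $nqn malloc0 -n 1
cat > /tmp/zcopy.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false } } ] } ] }
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/zcopy.json -t 10 -q 128 -w verify -o 8192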
00:15:31.635 [2024-11-18 15:01:55.052042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.203 15:01:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:32.203 15:01:55 -- common/autotest_common.sh@862 -- # return 0 00:15:32.203 15:01:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:32.203 15:01:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:32.203 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:15:32.203 15:01:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.203 15:01:55 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:32.203 15:01:55 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:32.203 15:01:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.203 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:15:32.203 [2024-11-18 15:01:55.781879] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.203 15:01:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.203 15:01:55 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:32.203 15:01:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.203 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:15:32.462 15:01:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.462 15:01:55 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.462 15:01:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.462 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:15:32.462 [2024-11-18 15:01:55.802015] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.462 15:01:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.462 15:01:55 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:32.462 15:01:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.462 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:15:32.462 15:01:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.462 15:01:55 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:32.462 15:01:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.462 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:15:32.462 malloc0 00:15:32.462 15:01:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.462 15:01:55 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:32.462 15:01:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.462 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:15:32.462 15:01:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.462 15:01:55 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:32.462 15:01:55 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:32.462 15:01:55 -- nvmf/common.sh@520 -- # config=() 00:15:32.462 15:01:55 -- nvmf/common.sh@520 -- # local subsystem config 00:15:32.462 15:01:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:32.462 15:01:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:32.462 { 00:15:32.462 "params": { 00:15:32.462 "name": "Nvme$subsystem", 00:15:32.462 "trtype": "$TEST_TRANSPORT", 
00:15:32.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:32.462 "adrfam": "ipv4", 00:15:32.462 "trsvcid": "$NVMF_PORT", 00:15:32.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:32.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:32.462 "hdgst": ${hdgst:-false}, 00:15:32.462 "ddgst": ${ddgst:-false} 00:15:32.462 }, 00:15:32.462 "method": "bdev_nvme_attach_controller" 00:15:32.462 } 00:15:32.462 EOF 00:15:32.462 )") 00:15:32.462 15:01:55 -- nvmf/common.sh@542 -- # cat 00:15:32.462 15:01:55 -- nvmf/common.sh@544 -- # jq . 00:15:32.462 15:01:55 -- nvmf/common.sh@545 -- # IFS=, 00:15:32.462 15:01:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:32.462 "params": { 00:15:32.462 "name": "Nvme1", 00:15:32.462 "trtype": "tcp", 00:15:32.462 "traddr": "10.0.0.2", 00:15:32.462 "adrfam": "ipv4", 00:15:32.462 "trsvcid": "4420", 00:15:32.462 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:32.462 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:32.462 "hdgst": false, 00:15:32.462 "ddgst": false 00:15:32.462 }, 00:15:32.462 "method": "bdev_nvme_attach_controller" 00:15:32.462 }' 00:15:32.462 [2024-11-18 15:01:55.887777] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:32.462 [2024-11-18 15:01:55.887890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86324 ] 00:15:32.462 [2024-11-18 15:01:56.020732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.722 [2024-11-18 15:01:56.072105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.722 Running I/O for 10 seconds... 00:15:42.696 00:15:42.696 Latency(us) 00:15:42.696 [2024-11-18T15:02:06.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.696 [2024-11-18T15:02:06.286Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:42.696 Verification LBA range: start 0x0 length 0x1000 00:15:42.696 Nvme1n1 : 10.01 11183.30 87.37 0.00 0.00 11417.90 1273.48 17873.45 00:15:42.696 [2024-11-18T15:02:06.286Z] =================================================================================================================== 00:15:42.696 [2024-11-18T15:02:06.286Z] Total : 11183.30 87.37 0.00 0.00 11417.90 1273.48 17873.45 00:15:42.955 15:02:06 -- target/zcopy.sh@39 -- # perfpid=86443 00:15:42.955 15:02:06 -- target/zcopy.sh@41 -- # xtrace_disable 00:15:42.955 15:02:06 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:42.955 15:02:06 -- common/autotest_common.sh@10 -- # set +x 00:15:42.955 15:02:06 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:42.955 15:02:06 -- nvmf/common.sh@520 -- # config=() 00:15:42.955 15:02:06 -- nvmf/common.sh@520 -- # local subsystem config 00:15:42.955 15:02:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:42.955 15:02:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:42.955 { 00:15:42.955 "params": { 00:15:42.955 "name": "Nvme$subsystem", 00:15:42.955 "trtype": "$TEST_TRANSPORT", 00:15:42.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:42.955 "adrfam": "ipv4", 00:15:42.955 "trsvcid": "$NVMF_PORT", 00:15:42.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:42.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:42.955 "hdgst": ${hdgst:-false}, 00:15:42.955 "ddgst": ${ddgst:-false} 
00:15:42.955 }, 00:15:42.955 "method": "bdev_nvme_attach_controller" 00:15:42.955 } 00:15:42.955 EOF 00:15:42.955 )") 00:15:42.955 15:02:06 -- nvmf/common.sh@542 -- # cat 00:15:42.955 [2024-11-18 15:02:06.506300] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.955 [2024-11-18 15:02:06.506398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.955 15:02:06 -- nvmf/common.sh@544 -- # jq . 00:15:42.955 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.955 15:02:06 -- nvmf/common.sh@545 -- # IFS=, 00:15:42.955 15:02:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:42.955 "params": { 00:15:42.955 "name": "Nvme1", 00:15:42.955 "trtype": "tcp", 00:15:42.955 "traddr": "10.0.0.2", 00:15:42.955 "adrfam": "ipv4", 00:15:42.955 "trsvcid": "4420", 00:15:42.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:42.955 "hdgst": false, 00:15:42.955 "ddgst": false 00:15:42.955 }, 00:15:42.955 "method": "bdev_nvme_attach_controller" 00:15:42.955 }' 00:15:42.955 [2024-11-18 15:02:06.518268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.955 [2024-11-18 15:02:06.518311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.955 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.955 [2024-11-18 15:02:06.530265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.955 [2024-11-18 15:02:06.530309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.955 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.955 [2024-11-18 15:02:06.538200] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:42.955 [2024-11-18 15:02:06.538303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86443 ] 00:15:43.215 [2024-11-18 15:02:06.542269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.215 [2024-11-18 15:02:06.542311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.215 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.215 [2024-11-18 15:02:06.554271] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.215 [2024-11-18 15:02:06.554338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.215 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.215 [2024-11-18 15:02:06.566258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.215 [2024-11-18 15:02:06.566301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.215 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.215 [2024-11-18 15:02:06.578260] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.215 [2024-11-18 15:02:06.578302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.215 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.215 [2024-11-18 15:02:06.590261] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.215 [2024-11-18 15:02:06.590304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.215 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.215 [2024-11-18 15:02:06.602264] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.215 [2024-11-18 15:02:06.602304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.215 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.215 [2024-11-18 15:02:06.614267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.215 [2024-11-18 15:02:06.614309] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.215 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.215 [2024-11-18 15:02:06.626269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.215 [2024-11-18 15:02:06.626312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.215 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.215 [2024-11-18 15:02:06.638273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.215 [2024-11-18 15:02:06.638340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.215 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.215 [2024-11-18 15:02:06.650274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.215 [2024-11-18 15:02:06.650342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.215 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.215 [2024-11-18 15:02:06.662278] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.215 [2024-11-18 15:02:06.662343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.215 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.215 [2024-11-18 15:02:06.669027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.215 [2024-11-18 15:02:06.674280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.215 [2024-11-18 15:02:06.674345] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.215 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.215 [2024-11-18 15:02:06.686282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.215 [2024-11-18 15:02:06.686347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.215 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.215 [2024-11-18 15:02:06.698283] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.215 [2024-11-18 15:02:06.698348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.215 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.215 [2024-11-18 15:02:06.710284] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.215 [2024-11-18 15:02:06.710349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.215 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.215 [2024-11-18 15:02:06.722289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.215 [2024-11-18 15:02:06.722355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.215 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.215 [2024-11-18 15:02:06.730668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.215 [2024-11-18 15:02:06.734292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.215 [2024-11-18 15:02:06.734357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.215 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.215 [2024-11-18 15:02:06.746296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.215 [2024-11-18 15:02:06.746362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.215 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.216 [2024-11-18 15:02:06.758299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.216 [2024-11-18 15:02:06.758365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.216 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.216 [2024-11-18 15:02:06.770300] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.216 [2024-11-18 15:02:06.770366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.216 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.216 [2024-11-18 15:02:06.782322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.216 [2024-11-18 15:02:06.782388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.216 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.216 [2024-11-18 15:02:06.794308] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.216 [2024-11-18 15:02:06.794375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.216 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.475 [2024-11-18 15:02:06.806308] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.475 [2024-11-18 15:02:06.806367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.475 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.475 [2024-11-18 15:02:06.818312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.475 [2024-11-18 15:02:06.818361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.475 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.475 [2024-11-18 15:02:06.830323] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.475 [2024-11-18 15:02:06.830380] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.475 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.475 [2024-11-18 15:02:06.842343] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.475 [2024-11-18 15:02:06.842386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.475 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.475 [2024-11-18 15:02:06.854367] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.475 [2024-11-18 15:02:06.854413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.475 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.475 [2024-11-18 15:02:06.866363] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.475 [2024-11-18 15:02:06.866410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.475 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.475 [2024-11-18 15:02:06.878370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.475 [2024-11-18 15:02:06.878417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.475 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.475 [2024-11-18 15:02:06.890366] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.475 [2024-11-18 15:02:06.890414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.475 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.475 [2024-11-18 15:02:06.902368] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.475 [2024-11-18 15:02:06.902415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.475 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.475 [2024-11-18 15:02:06.914378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.475 [2024-11-18 15:02:06.914427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.475 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.475 Running I/O for 5 seconds... 
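(Annotation, not part of the captured output.) The run of identical JSON-RPC failures surrounding "Running I/O for 5 seconds..." is deliberate: while bdevperf drives the randrw workload, zcopy.sh keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which malloc0 already occupies on cnode1, so every call is rejected with Code=-32602 (Invalid parameters) and the target is expected to keep serving I/O throughout. A minimal sketch of the target configuration plus that retry pattern, assuming the repository's scripts/rpc.py (path inferred from the repo layout in this log) and the default /var/tmp/spdk.sock RPC socket that waitforlisten reports above:

    #!/usr/bin/env bash
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path; adjust to your checkout
    # Target configuration, mirroring the rpc_cmd calls logged earlier in this run:
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # Fault injection while I/O is in flight: NSID 1 is taken, so each retry must fail cleanly.
    for i in $(seq 1 5); do
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
            || echo "attempt $i rejected (expected: Code=-32602 Msg=Invalid parameters)"
    done

The expected outcome is the same -32602 rejection repeated throughout the log, with the bdevperf job unaffected, as the completed 10-second verify pass and its latency summary earlier in this section already demonstrate.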
00:15:43.475 [2024-11-18 15:02:06.926374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.475 [2024-11-18 15:02:06.926419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.475 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.475 [2024-11-18 15:02:06.942331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.475 [2024-11-18 15:02:06.942364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.475 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.475 [2024-11-18 15:02:06.957680] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.475 [2024-11-18 15:02:06.957715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.475 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.475 [2024-11-18 15:02:06.974464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.475 [2024-11-18 15:02:06.974498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.475 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.475 [2024-11-18 15:02:06.990751] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.475 [2024-11-18 15:02:06.990785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.475 2024/11/18 15:02:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.475 [2024-11-18 15:02:07.006867] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.475 [2024-11-18 15:02:07.006902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.475 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.475 [2024-11-18 15:02:07.019301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.475 [2024-11-18 15:02:07.019343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.475 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:43.475 [2024-11-18 15:02:07.029923] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.475 [2024-11-18 15:02:07.029958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.475 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.476 [2024-11-18 15:02:07.045737] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.476 [2024-11-18 15:02:07.045771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.476 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.735 [2024-11-18 15:02:07.062350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.735 [2024-11-18 15:02:07.062383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.735 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.735 [2024-11-18 15:02:07.078886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.735 [2024-11-18 15:02:07.078920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.735 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.735 [2024-11-18 15:02:07.094898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.735 [2024-11-18 15:02:07.094932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.735 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.735 [2024-11-18 15:02:07.111017] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.735 [2024-11-18 15:02:07.111051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.735 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.735 [2024-11-18 15:02:07.127440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.735 [2024-11-18 15:02:07.127471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.735 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:43.735 [2024-11-18 15:02:07.139865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.735 [2024-11-18 15:02:07.139899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.735 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.735 [2024-11-18 15:02:07.151276] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.735 [2024-11-18 15:02:07.151369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.735 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.735 [2024-11-18 15:02:07.166882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.735 [2024-11-18 15:02:07.166917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.735 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.735 [2024-11-18 15:02:07.184244] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.736 [2024-11-18 15:02:07.184279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.736 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.736 [2024-11-18 15:02:07.199609] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.736 [2024-11-18 15:02:07.199644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.736 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.736 [2024-11-18 15:02:07.210805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.736 [2024-11-18 15:02:07.210839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.736 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.736 [2024-11-18 15:02:07.226187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.736 [2024-11-18 15:02:07.226222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.736 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.736 [2024-11-18 15:02:07.243425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.736 [2024-11-18 15:02:07.243456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.736 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.736 [2024-11-18 15:02:07.258834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.736 [2024-11-18 15:02:07.258870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.736 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.736 [2024-11-18 15:02:07.270515] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.736 [2024-11-18 15:02:07.270548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.736 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.736 [2024-11-18 15:02:07.286519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.736 [2024-11-18 15:02:07.286569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.736 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.736 [2024-11-18 15:02:07.302427] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.736 [2024-11-18 15:02:07.302476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.736 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.736 [2024-11-18 15:02:07.318971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.736 [2024-11-18 15:02:07.319006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.995 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.995 [2024-11-18 15:02:07.335899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.995 [2024-11-18 15:02:07.335933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.995 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.995 [2024-11-18 15:02:07.352877] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.995 [2024-11-18 15:02:07.352912] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.995 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.995 [2024-11-18 15:02:07.369811] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.995 [2024-11-18 15:02:07.369846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.995 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.995 [2024-11-18 15:02:07.386069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.995 [2024-11-18 15:02:07.386103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.995 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.995 [2024-11-18 15:02:07.403852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.995 [2024-11-18 15:02:07.403900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.995 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.995 [2024-11-18 15:02:07.418626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.995 [2024-11-18 15:02:07.418677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.995 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.995 [2024-11-18 15:02:07.434813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.995 [2024-11-18 15:02:07.434848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.995 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.995 [2024-11-18 15:02:07.450152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.995 [2024-11-18 15:02:07.450188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.996 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.996 [2024-11-18 15:02:07.466311] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.996 [2024-11-18 15:02:07.466370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.996 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.996 [2024-11-18 15:02:07.477008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.996 [2024-11-18 15:02:07.477041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.996 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.996 [2024-11-18 15:02:07.493240] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.996 [2024-11-18 15:02:07.493274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.996 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.996 [2024-11-18 15:02:07.509358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.996 [2024-11-18 15:02:07.509391] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.996 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.996 [2024-11-18 15:02:07.520212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.996 [2024-11-18 15:02:07.520243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.996 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.996 [2024-11-18 15:02:07.535994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.996 [2024-11-18 15:02:07.536028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.996 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.996 [2024-11-18 15:02:07.552234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.996 [2024-11-18 15:02:07.552269] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.996 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:43.996 [2024-11-18 15:02:07.568062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:43.996 [2024-11-18 15:02:07.568096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.996 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.255 [2024-11-18 15:02:07.583413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.255 [2024-11-18 15:02:07.583446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.255 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.255 [2024-11-18 15:02:07.599596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.255 [2024-11-18 15:02:07.599630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.255 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.255 [2024-11-18 15:02:07.615786] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.255 [2024-11-18 15:02:07.615820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.255 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.255 [2024-11-18 15:02:07.632072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.255 [2024-11-18 15:02:07.632107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.255 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.255 [2024-11-18 15:02:07.643650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.255 [2024-11-18 15:02:07.643698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.255 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.256 [2024-11-18 15:02:07.659718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.256 [2024-11-18 15:02:07.659753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.256 2024/11/18 15:02:07 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:44.256 [2024-11-18 15:02:07.675697] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:44.256 [2024-11-18 15:02:07.675731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:44.256 2024/11/18 15:02:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three error lines repeat with successive timestamps from 15:02:07.691 through 15:02:08.989 as the nvmf_subsystem_add_ns call is retried against NSID 1 ...]
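The failures above all come from reissuing the same RPC: nvmf_subsystem_add_ns against nqn.2016-06.io.spdk:cnode1 with bdev_name malloc0 and nsid 1, which the target rejects with Code=-32602 once NSID 1 is taken. A minimal Python sketch of such a call follows; the socket path and the assumption that the subsystem and bdev already exist are illustrative and are not taken from this build's test scripts.

#!/usr/bin/env python3
# Hedged sketch: reissue nvmf_subsystem_add_ns with the same NSID to provoke
# the Code=-32602 "Requested NSID 1 already in use" failure seen in this log.
# Assumptions: an SPDK target is running with subsystem nqn.2016-06.io.spdk:cnode1
# and bdev malloc0, listening on the default RPC socket /var/tmp/spdk.sock.
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"  # assumed default SPDK RPC socket


def rpc_call(sock, method, params, req_id):
    # Send one JSON-RPC 2.0 request and read until a complete JSON reply parses.
    sock.sendall(json.dumps(
        {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    ).encode())
    buf = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            raise ConnectionError("RPC socket closed before a full reply arrived")
        buf += chunk
        try:
            return json.loads(buf.decode())
        except ValueError:
            continue  # reply not complete yet


with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect(SOCK_PATH)
    params = {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    }
    # First call should succeed if NSID 1 is still free.
    print(rpc_call(s, "nvmf_subsystem_add_ns", params, 1))
    # Second call with the same nsid should return error Code=-32602
    # Msg=Invalid parameters, matching the repeated entries in this log.
    print(rpc_call(s, "nvmf_subsystem_add_ns", params, 2))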
00:15:45.555 [2024-11-18 15:02:09.005563] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:45.555 [2024-11-18 15:02:09.005599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:45.555 2024/11/18 15:02:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three error lines repeat with successive timestamps from 15:02:09.022 through 15:02:09.857 ...]
00:15:46.332 [2024-11-18 15:02:09.874129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:46.332 [2024-11-18 15:02:09.874164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:46.332 2024/11/18 15:02:09 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.332 [2024-11-18 15:02:09.890004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.332 [2024-11-18 15:02:09.890037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.332 2024/11/18 15:02:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.332 [2024-11-18 15:02:09.901902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.332 [2024-11-18 15:02:09.901936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.332 2024/11/18 15:02:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.592 [2024-11-18 15:02:09.918520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.592 [2024-11-18 15:02:09.918560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.592 2024/11/18 15:02:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.592 [2024-11-18 15:02:09.929520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.592 [2024-11-18 15:02:09.929569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.592 2024/11/18 15:02:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.592 [2024-11-18 15:02:09.944728] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.592 [2024-11-18 15:02:09.944764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.592 2024/11/18 15:02:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.592 [2024-11-18 15:02:09.955462] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.592 [2024-11-18 15:02:09.955511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.592 2024/11/18 15:02:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.592 [2024-11-18 15:02:09.970072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.592 [2024-11-18 15:02:09.970106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.592 2024/11/18 
15:02:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.592 [2024-11-18 15:02:09.978667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.592 [2024-11-18 15:02:09.978700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.592 2024/11/18 15:02:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.592 [2024-11-18 15:02:09.994857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.592 [2024-11-18 15:02:09.994892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.592 2024/11/18 15:02:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.592 [2024-11-18 15:02:10.010832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.592 [2024-11-18 15:02:10.010885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.592 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.592 [2024-11-18 15:02:10.028908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.592 [2024-11-18 15:02:10.028977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.592 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.592 [2024-11-18 15:02:10.043659] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.593 [2024-11-18 15:02:10.043695] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.593 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.593 [2024-11-18 15:02:10.052461] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.593 [2024-11-18 15:02:10.052509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.593 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.593 [2024-11-18 15:02:10.061738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.593 [2024-11-18 15:02:10.061788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:46.593 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.593 [2024-11-18 15:02:10.071136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.593 [2024-11-18 15:02:10.071170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.593 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.593 [2024-11-18 15:02:10.080908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.593 [2024-11-18 15:02:10.080941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.593 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.593 [2024-11-18 15:02:10.090276] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.593 [2024-11-18 15:02:10.090306] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.593 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.593 [2024-11-18 15:02:10.099396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.593 [2024-11-18 15:02:10.099445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.593 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.593 [2024-11-18 15:02:10.113055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.593 [2024-11-18 15:02:10.113089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.593 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.593 [2024-11-18 15:02:10.129118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.593 [2024-11-18 15:02:10.129153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.593 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.593 [2024-11-18 15:02:10.145589] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.593 [2024-11-18 15:02:10.145624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:46.593 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.593 [2024-11-18 15:02:10.162126] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.593 [2024-11-18 15:02:10.162161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.593 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.593 [2024-11-18 15:02:10.178735] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.593 [2024-11-18 15:02:10.178770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.852 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.852 [2024-11-18 15:02:10.196821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.852 [2024-11-18 15:02:10.196871] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.852 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.852 [2024-11-18 15:02:10.206634] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.852 [2024-11-18 15:02:10.206669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.852 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.852 [2024-11-18 15:02:10.215867] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.852 [2024-11-18 15:02:10.215903] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.852 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.852 [2024-11-18 15:02:10.225499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.852 [2024-11-18 15:02:10.225549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.852 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.852 [2024-11-18 15:02:10.238504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.852 [2024-11-18 15:02:10.238553] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:46.853 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.853 [2024-11-18 15:02:10.255333] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.853 [2024-11-18 15:02:10.255367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.853 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.853 [2024-11-18 15:02:10.271616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.853 [2024-11-18 15:02:10.271648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.853 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.853 [2024-11-18 15:02:10.287685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.853 [2024-11-18 15:02:10.287719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.853 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.853 [2024-11-18 15:02:10.301643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.853 [2024-11-18 15:02:10.301677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.853 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.853 [2024-11-18 15:02:10.317071] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.853 [2024-11-18 15:02:10.317104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.853 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.853 [2024-11-18 15:02:10.327979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.853 [2024-11-18 15:02:10.328028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.853 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.853 [2024-11-18 15:02:10.343870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.853 [2024-11-18 15:02:10.343904] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.853 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.853 [2024-11-18 15:02:10.360281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.853 [2024-11-18 15:02:10.360327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.853 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.853 [2024-11-18 15:02:10.376557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.853 [2024-11-18 15:02:10.376607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.853 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.853 [2024-11-18 15:02:10.392943] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.853 [2024-11-18 15:02:10.392977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.853 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.853 [2024-11-18 15:02:10.409474] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.853 [2024-11-18 15:02:10.409507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.853 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:46.853 [2024-11-18 15:02:10.426392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:46.853 [2024-11-18 15:02:10.426425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:46.853 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.112 [2024-11-18 15:02:10.442221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.112 [2024-11-18 15:02:10.442255] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.112 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.113 [2024-11-18 15:02:10.459399] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.113 [2024-11-18 
15:02:10.459458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.113 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.113 [2024-11-18 15:02:10.474874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.113 [2024-11-18 15:02:10.474909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.113 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.113 [2024-11-18 15:02:10.484034] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.113 [2024-11-18 15:02:10.484083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.113 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.113 [2024-11-18 15:02:10.499254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.113 [2024-11-18 15:02:10.499288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.113 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.113 [2024-11-18 15:02:10.514454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.113 [2024-11-18 15:02:10.514488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.113 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.113 [2024-11-18 15:02:10.528920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.113 [2024-11-18 15:02:10.528971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.113 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.113 [2024-11-18 15:02:10.543948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.113 [2024-11-18 15:02:10.543982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.113 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.113 [2024-11-18 15:02:10.560716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:47.113 [2024-11-18 15:02:10.560783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.113 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.113 [2024-11-18 15:02:10.577149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.113 [2024-11-18 15:02:10.577183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.113 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.113 [2024-11-18 15:02:10.594031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.113 [2024-11-18 15:02:10.594081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.113 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.113 [2024-11-18 15:02:10.609615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.113 [2024-11-18 15:02:10.609665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.113 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.113 [2024-11-18 15:02:10.624420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.113 [2024-11-18 15:02:10.624493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.113 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.113 [2024-11-18 15:02:10.636471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.113 [2024-11-18 15:02:10.636528] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.113 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.113 [2024-11-18 15:02:10.645216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.113 [2024-11-18 15:02:10.645250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.113 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.113 [2024-11-18 15:02:10.654261] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:47.113 [2024-11-18 15:02:10.654294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.113 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.113 [2024-11-18 15:02:10.668279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.113 [2024-11-18 15:02:10.668311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.113 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.113 [2024-11-18 15:02:10.684172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.113 [2024-11-18 15:02:10.684206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.113 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.373 [2024-11-18 15:02:10.700094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.373 [2024-11-18 15:02:10.700129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.373 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.373 [2024-11-18 15:02:10.716657] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.373 [2024-11-18 15:02:10.716707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.373 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.373 [2024-11-18 15:02:10.733700] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.373 [2024-11-18 15:02:10.733764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.373 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.373 [2024-11-18 15:02:10.744490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.373 [2024-11-18 15:02:10.744541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.373 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.373 [2024-11-18 15:02:10.760675] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:47.373 [2024-11-18 15:02:10.760725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.373 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.373 [2024-11-18 15:02:10.776237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.373 [2024-11-18 15:02:10.776273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.373 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.373 [2024-11-18 15:02:10.787791] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.373 [2024-11-18 15:02:10.787824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.373 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.373 [2024-11-18 15:02:10.795610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.373 [2024-11-18 15:02:10.795657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.373 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.373 [2024-11-18 15:02:10.806805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.373 [2024-11-18 15:02:10.806839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.373 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.373 [2024-11-18 15:02:10.815403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.373 [2024-11-18 15:02:10.815451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.373 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.373 [2024-11-18 15:02:10.827129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.373 [2024-11-18 15:02:10.827163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.373 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.373 [2024-11-18 15:02:10.842453] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.373 [2024-11-18 15:02:10.842486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.373 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.373 [2024-11-18 15:02:10.854227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.373 [2024-11-18 15:02:10.854260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.373 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.373 [2024-11-18 15:02:10.866546] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.373 [2024-11-18 15:02:10.866581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.373 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.373 [2024-11-18 15:02:10.875382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.373 [2024-11-18 15:02:10.875429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.373 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.373 [2024-11-18 15:02:10.888168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.373 [2024-11-18 15:02:10.888201] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.373 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.373 [2024-11-18 15:02:10.903747] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.373 [2024-11-18 15:02:10.903781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.373 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.373 [2024-11-18 15:02:10.920150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.373 [2024-11-18 15:02:10.920184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.373 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.373 [2024-11-18 
15:02:10.936834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.373 [2024-11-18 15:02:10.936867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.373 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.373 [2024-11-18 15:02:10.953151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.373 [2024-11-18 15:02:10.953185] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.373 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.633 [2024-11-18 15:02:10.969736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.633 [2024-11-18 15:02:10.969769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.633 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.633 [2024-11-18 15:02:10.986826] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.633 [2024-11-18 15:02:10.986861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.633 2024/11/18 15:02:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.633 [2024-11-18 15:02:11.003237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.633 [2024-11-18 15:02:11.003271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.633 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.633 [2024-11-18 15:02:11.019718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.633 [2024-11-18 15:02:11.019751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.633 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.633 [2024-11-18 15:02:11.036485] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.633 [2024-11-18 15:02:11.036531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.633 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
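Every retry in this stretch is the same JSON-RPC call being rejected: the params map printed by the test client decodes to an nvmf_subsystem_add_ns request asking to attach bdev malloc0 as NSID 1 on nqn.2016-06.io.spdk:cnode1. A minimal sketch of that request follows, assuming the stock scripts/rpc.py helper from the SPDK tree and its --nsid flag; the exact command the test harness issues is not shown in this log.

  # Hypothetical reproduction of the call being retried above (paths assumed;
  # scripts/rpc.py is the JSON-RPC client shipped in the SPDK source tree).
  ./scripts/rpc.py nvmf_subsystem_add_ns --nsid 1 nqn.2016-06.io.spdk:cnode1 malloc0
  # On the wire this corresponds to the request below. While NSID 1 is still
  # attached to cnode1, spdk_nvmf_subsystem_add_ns_ext rejects it and the target
  # answers Code=-32602 "Invalid parameters", which is what each retry logs.
  # {
  #   "jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
  #   "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
  #               "namespace": { "bdev_name": "malloc0", "nsid": 1 } }
  # }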
00:15:47.633 [2024-11-18 15:02:11.052517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.633 [2024-11-18 15:02:11.052568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.633 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.633 [2024-11-18 15:02:11.069197] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.633 [2024-11-18 15:02:11.069231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.633 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.633 [2024-11-18 15:02:11.085908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.633 [2024-11-18 15:02:11.085943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.633 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.633 [2024-11-18 15:02:11.102660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.634 [2024-11-18 15:02:11.102695] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.634 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.634 [2024-11-18 15:02:11.118752] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.634 [2024-11-18 15:02:11.118787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.634 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.634 [2024-11-18 15:02:11.135157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.634 [2024-11-18 15:02:11.135191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.634 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.634 [2024-11-18 15:02:11.151466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.634 [2024-11-18 15:02:11.151516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.634 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:47.634 [2024-11-18 15:02:11.167656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.634 [2024-11-18 15:02:11.167690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.634 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.634 [2024-11-18 15:02:11.184255] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.634 [2024-11-18 15:02:11.184290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.634 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.634 [2024-11-18 15:02:11.200771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.634 [2024-11-18 15:02:11.200805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.634 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.634 [2024-11-18 15:02:11.217046] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.634 [2024-11-18 15:02:11.217081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.893 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.893 [2024-11-18 15:02:11.233427] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.893 [2024-11-18 15:02:11.233476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.893 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.893 [2024-11-18 15:02:11.249963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.893 [2024-11-18 15:02:11.249998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.893 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.893 [2024-11-18 15:02:11.260977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.893 [2024-11-18 15:02:11.261012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.893 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:47.893 [2024-11-18 15:02:11.276498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.893 [2024-11-18 15:02:11.276549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.893 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.893 [2024-11-18 15:02:11.293182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.893 [2024-11-18 15:02:11.293218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.893 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.893 [2024-11-18 15:02:11.309408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.893 [2024-11-18 15:02:11.309441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.893 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.893 [2024-11-18 15:02:11.325908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.893 [2024-11-18 15:02:11.325942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.893 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.893 [2024-11-18 15:02:11.341925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.893 [2024-11-18 15:02:11.341960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.893 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.893 [2024-11-18 15:02:11.359131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.893 [2024-11-18 15:02:11.359166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.893 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.893 [2024-11-18 15:02:11.375837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.893 [2024-11-18 15:02:11.375873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.893 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.893 [2024-11-18 15:02:11.392566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.893 [2024-11-18 15:02:11.392626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.894 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.894 [2024-11-18 15:02:11.408731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.894 [2024-11-18 15:02:11.408796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.894 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.894 [2024-11-18 15:02:11.424979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.894 [2024-11-18 15:02:11.425013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.894 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.894 [2024-11-18 15:02:11.436694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.894 [2024-11-18 15:02:11.436730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.894 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.894 [2024-11-18 15:02:11.453145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.894 [2024-11-18 15:02:11.453179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.894 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.894 [2024-11-18 15:02:11.468791] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.894 [2024-11-18 15:02:11.468842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.894 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.153 [2024-11-18 15:02:11.480931] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.153 [2024-11-18 15:02:11.480982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.153 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.153 [2024-11-18 15:02:11.497266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.153 [2024-11-18 15:02:11.497300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.153 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.153 [2024-11-18 15:02:11.514405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.153 [2024-11-18 15:02:11.514439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.153 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.153 [2024-11-18 15:02:11.530653] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.153 [2024-11-18 15:02:11.530687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.153 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.153 [2024-11-18 15:02:11.547226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.153 [2024-11-18 15:02:11.547260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.153 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.153 [2024-11-18 15:02:11.563412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.153 [2024-11-18 15:02:11.563446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.153 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.153 [2024-11-18 15:02:11.579253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.153 [2024-11-18 15:02:11.579287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.153 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.153 [2024-11-18 15:02:11.590173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.153 [2024-11-18 15:02:11.590207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.153 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.153 [2024-11-18 15:02:11.606398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.153 [2024-11-18 15:02:11.606448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.153 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.153 [2024-11-18 15:02:11.622409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.153 [2024-11-18 15:02:11.622442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.153 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.153 [2024-11-18 15:02:11.638892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.153 [2024-11-18 15:02:11.638927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.153 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.153 [2024-11-18 15:02:11.655048] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.153 [2024-11-18 15:02:11.655082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.153 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.153 [2024-11-18 15:02:11.671607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.153 [2024-11-18 15:02:11.671641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.153 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.153 [2024-11-18 15:02:11.688210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.153 [2024-11-18 15:02:11.688244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.153 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.153 [2024-11-18 15:02:11.704375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.153 [2024-11-18 15:02:11.704430] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.153 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.153 [2024-11-18 15:02:11.721369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.153 [2024-11-18 15:02:11.721404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.153 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.153 [2024-11-18 15:02:11.737895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.153 [2024-11-18 15:02:11.737945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.413 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.413 [2024-11-18 15:02:11.753271] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.413 [2024-11-18 15:02:11.753305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.413 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.413 [2024-11-18 15:02:11.764090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.414 [2024-11-18 15:02:11.764125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.414 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.414 [2024-11-18 15:02:11.780509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.414 [2024-11-18 15:02:11.780560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.414 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.414 [2024-11-18 15:02:11.795371] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.414 [2024-11-18 15:02:11.795420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.414 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.414 [2024-11-18 15:02:11.804253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.414 [2024-11-18 15:02:11.804300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.414 2024/11/18 15:02:11 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.414 [2024-11-18 15:02:11.817233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.414 [2024-11-18 15:02:11.817267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.414 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.414 [2024-11-18 15:02:11.825495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.414 [2024-11-18 15:02:11.825546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.414 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.414 [2024-11-18 15:02:11.839327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.414 [2024-11-18 15:02:11.839359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.414 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.414 [2024-11-18 15:02:11.848314] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.414 [2024-11-18 15:02:11.848429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.414 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.414 [2024-11-18 15:02:11.862208] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.414 [2024-11-18 15:02:11.862242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.414 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.414 [2024-11-18 15:02:11.877930] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.414 [2024-11-18 15:02:11.877964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.414 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.414 [2024-11-18 15:02:11.894619] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.414 [2024-11-18 15:02:11.894669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.414 2024/11/18 15:02:11 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.414 [2024-11-18 15:02:11.910864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.414 [2024-11-18 15:02:11.910899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.414 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.414 [2024-11-18 15:02:11.926823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.414 [2024-11-18 15:02:11.926856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.414 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.414 00:15:48.414 Latency(us) 00:15:48.414 [2024-11-18T15:02:12.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.414 [2024-11-18T15:02:12.004Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:48.414 Nvme1n1 : 5.01 14041.87 109.70 0.00 0.00 9105.45 3798.11 19541.64 00:15:48.414 [2024-11-18T15:02:12.004Z] =================================================================================================================== 00:15:48.414 [2024-11-18T15:02:12.004Z] Total : 14041.87 109.70 0.00 0.00 9105.45 3798.11 19541.64 00:15:48.414 [2024-11-18 15:02:11.935824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.414 [2024-11-18 15:02:11.935872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.414 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.414 [2024-11-18 15:02:11.947801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.414 [2024-11-18 15:02:11.947846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.414 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.414 [2024-11-18 15:02:11.959801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.414 [2024-11-18 15:02:11.959845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.414 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.414 [2024-11-18 15:02:11.971801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.414 [2024-11-18 
15:02:11.971844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.414 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.414 [2024-11-18 15:02:11.983804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.414 [2024-11-18 15:02:11.983847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.414 2024/11/18 15:02:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.414 [2024-11-18 15:02:11.995806] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.414 [2024-11-18 15:02:11.995848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.674 2024/11/18 15:02:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.674 [2024-11-18 15:02:12.007808] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.674 [2024-11-18 15:02:12.007849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.674 2024/11/18 15:02:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.674 [2024-11-18 15:02:12.019813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.674 [2024-11-18 15:02:12.019856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.674 2024/11/18 15:02:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.674 [2024-11-18 15:02:12.031817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.674 [2024-11-18 15:02:12.031859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.674 2024/11/18 15:02:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.674 [2024-11-18 15:02:12.043818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.674 [2024-11-18 15:02:12.043860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.674 2024/11/18 15:02:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.674 [2024-11-18 15:02:12.055821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:48.674 [2024-11-18 15:02:12.055864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.674 2024/11/18 15:02:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.674 [2024-11-18 15:02:12.067826] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.674 [2024-11-18 15:02:12.067868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.674 2024/11/18 15:02:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.674 [2024-11-18 15:02:12.079829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.674 [2024-11-18 15:02:12.079872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.674 2024/11/18 15:02:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.674 [2024-11-18 15:02:12.091831] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.674 [2024-11-18 15:02:12.091873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.674 2024/11/18 15:02:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.674 [2024-11-18 15:02:12.103835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.674 [2024-11-18 15:02:12.103878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.674 2024/11/18 15:02:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.674 [2024-11-18 15:02:12.115837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.674 [2024-11-18 15:02:12.115877] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.674 2024/11/18 15:02:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.674 [2024-11-18 15:02:12.127844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.674 [2024-11-18 15:02:12.127886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.674 2024/11/18 15:02:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.674 [2024-11-18 15:02:12.139863] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:48.674 [2024-11-18 15:02:12.139904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.674 2024/11/18 15:02:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.674 [2024-11-18 15:02:12.151849] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.674 [2024-11-18 15:02:12.151888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.674 2024/11/18 15:02:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.674 [2024-11-18 15:02:12.159853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.674 [2024-11-18 15:02:12.159895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.674 2024/11/18 15:02:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.674 [2024-11-18 15:02:12.171862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.674 [2024-11-18 15:02:12.171900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.674 2024/11/18 15:02:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.674 [2024-11-18 15:02:12.179855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.674 [2024-11-18 15:02:12.179891] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.674 2024/11/18 15:02:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.674 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (86443) - No such process 00:15:48.674 15:02:12 -- target/zcopy.sh@49 -- # wait 86443 00:15:48.674 15:02:12 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:48.674 15:02:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.674 15:02:12 -- common/autotest_common.sh@10 -- # set +x 00:15:48.674 15:02:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.674 15:02:12 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:48.674 15:02:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.674 15:02:12 -- common/autotest_common.sh@10 -- # set +x 00:15:48.674 delay0 00:15:48.674 15:02:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.674 15:02:12 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:48.674 15:02:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.674 15:02:12 -- common/autotest_common.sh@10 -- # 
set +x 00:15:48.674 15:02:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.674 15:02:12 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:48.933 [2024-11-18 15:02:12.374883] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:55.499 Initializing NVMe Controllers 00:15:55.499 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:55.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:55.499 Initialization complete. Launching workers. 00:15:55.499 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 69 00:15:55.499 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 356, failed to submit 33 00:15:55.499 success 166, unsuccess 190, failed 0 00:15:55.499 15:02:18 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:55.499 15:02:18 -- target/zcopy.sh@60 -- # nvmftestfini 00:15:55.499 15:02:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:55.499 15:02:18 -- nvmf/common.sh@116 -- # sync 00:15:55.499 15:02:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:55.499 15:02:18 -- nvmf/common.sh@119 -- # set +e 00:15:55.499 15:02:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:55.499 15:02:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:55.499 rmmod nvme_tcp 00:15:55.499 rmmod nvme_fabrics 00:15:55.499 rmmod nvme_keyring 00:15:55.499 15:02:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:55.499 15:02:18 -- nvmf/common.sh@123 -- # set -e 00:15:55.499 15:02:18 -- nvmf/common.sh@124 -- # return 0 00:15:55.499 15:02:18 -- nvmf/common.sh@477 -- # '[' -n 86268 ']' 00:15:55.499 15:02:18 -- nvmf/common.sh@478 -- # killprocess 86268 00:15:55.499 15:02:18 -- common/autotest_common.sh@936 -- # '[' -z 86268 ']' 00:15:55.499 15:02:18 -- common/autotest_common.sh@940 -- # kill -0 86268 00:15:55.499 15:02:18 -- common/autotest_common.sh@941 -- # uname 00:15:55.499 15:02:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:55.499 15:02:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86268 00:15:55.499 15:02:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:55.499 15:02:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:55.499 killing process with pid 86268 00:15:55.499 15:02:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86268' 00:15:55.499 15:02:18 -- common/autotest_common.sh@955 -- # kill 86268 00:15:55.499 15:02:18 -- common/autotest_common.sh@960 -- # wait 86268 00:15:55.499 15:02:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:55.499 15:02:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:55.499 15:02:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:55.499 15:02:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:55.499 15:02:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:55.499 15:02:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.499 15:02:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:55.499 15:02:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.499 15:02:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:55.499 00:15:55.499 real 0m24.607s 
00:15:55.499 user 0m38.590s 00:15:55.499 sys 0m7.226s 00:15:55.499 15:02:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:55.499 15:02:18 -- common/autotest_common.sh@10 -- # set +x 00:15:55.499 ************************************ 00:15:55.499 END TEST nvmf_zcopy 00:15:55.499 ************************************ 00:15:55.499 15:02:18 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:55.499 15:02:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:55.499 15:02:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:55.499 15:02:18 -- common/autotest_common.sh@10 -- # set +x 00:15:55.499 ************************************ 00:15:55.499 START TEST nvmf_nmic 00:15:55.499 ************************************ 00:15:55.499 15:02:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:55.499 * Looking for test storage... 00:15:55.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:55.499 15:02:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:55.499 15:02:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:55.499 15:02:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:55.499 15:02:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:55.499 15:02:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:55.499 15:02:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:55.499 15:02:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:55.499 15:02:19 -- scripts/common.sh@335 -- # IFS=.-: 00:15:55.499 15:02:19 -- scripts/common.sh@335 -- # read -ra ver1 00:15:55.499 15:02:19 -- scripts/common.sh@336 -- # IFS=.-: 00:15:55.499 15:02:19 -- scripts/common.sh@336 -- # read -ra ver2 00:15:55.499 15:02:19 -- scripts/common.sh@337 -- # local 'op=<' 00:15:55.499 15:02:19 -- scripts/common.sh@339 -- # ver1_l=2 00:15:55.499 15:02:19 -- scripts/common.sh@340 -- # ver2_l=1 00:15:55.499 15:02:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:55.499 15:02:19 -- scripts/common.sh@343 -- # case "$op" in 00:15:55.499 15:02:19 -- scripts/common.sh@344 -- # : 1 00:15:55.499 15:02:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:55.499 15:02:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:55.499 15:02:19 -- scripts/common.sh@364 -- # decimal 1 00:15:55.499 15:02:19 -- scripts/common.sh@352 -- # local d=1 00:15:55.499 15:02:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:55.499 15:02:19 -- scripts/common.sh@354 -- # echo 1 00:15:55.499 15:02:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:55.499 15:02:19 -- scripts/common.sh@365 -- # decimal 2 00:15:55.499 15:02:19 -- scripts/common.sh@352 -- # local d=2 00:15:55.499 15:02:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:55.499 15:02:19 -- scripts/common.sh@354 -- # echo 2 00:15:55.499 15:02:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:55.499 15:02:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:55.499 15:02:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:55.499 15:02:19 -- scripts/common.sh@367 -- # return 0 00:15:55.499 15:02:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:55.499 15:02:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:55.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.499 --rc genhtml_branch_coverage=1 00:15:55.499 --rc genhtml_function_coverage=1 00:15:55.499 --rc genhtml_legend=1 00:15:55.499 --rc geninfo_all_blocks=1 00:15:55.499 --rc geninfo_unexecuted_blocks=1 00:15:55.499 00:15:55.499 ' 00:15:55.499 15:02:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:55.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.499 --rc genhtml_branch_coverage=1 00:15:55.499 --rc genhtml_function_coverage=1 00:15:55.499 --rc genhtml_legend=1 00:15:55.499 --rc geninfo_all_blocks=1 00:15:55.499 --rc geninfo_unexecuted_blocks=1 00:15:55.499 00:15:55.499 ' 00:15:55.499 15:02:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:55.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.499 --rc genhtml_branch_coverage=1 00:15:55.499 --rc genhtml_function_coverage=1 00:15:55.499 --rc genhtml_legend=1 00:15:55.499 --rc geninfo_all_blocks=1 00:15:55.499 --rc geninfo_unexecuted_blocks=1 00:15:55.499 00:15:55.499 ' 00:15:55.499 15:02:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:55.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.500 --rc genhtml_branch_coverage=1 00:15:55.500 --rc genhtml_function_coverage=1 00:15:55.500 --rc genhtml_legend=1 00:15:55.500 --rc geninfo_all_blocks=1 00:15:55.500 --rc geninfo_unexecuted_blocks=1 00:15:55.500 00:15:55.500 ' 00:15:55.500 15:02:19 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:55.500 15:02:19 -- nvmf/common.sh@7 -- # uname -s 00:15:55.768 15:02:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:55.768 15:02:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.768 15:02:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.768 15:02:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.768 15:02:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.768 15:02:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.768 15:02:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.768 15:02:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.768 15:02:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.768 15:02:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.768 15:02:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:15:55.768 
15:02:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:15:55.768 15:02:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.768 15:02:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.768 15:02:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:55.768 15:02:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:55.768 15:02:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.769 15:02:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.769 15:02:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.769 15:02:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.769 15:02:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.769 15:02:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.769 15:02:19 -- paths/export.sh@5 -- # export PATH 00:15:55.769 15:02:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.769 15:02:19 -- nvmf/common.sh@46 -- # : 0 00:15:55.769 15:02:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:55.769 15:02:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:55.769 15:02:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:55.769 15:02:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.769 15:02:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.769 15:02:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
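The host NQN generated just above by nvme gen-hostnqn, with its UUID suffix reused as NVME_HOSTID, is what the initiator presents when it connects to the target later in this run. A minimal sketch of that pairing, assuming nvme-cli and a target already listening on 10.0.0.2:4420 (the subsystem NQN matches the one the test creates below; the rest is illustrative and not taken from autotest_common.sh):

  # Generate a host NQN once; its trailing UUID doubles as the host ID
  HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  HOSTID=${HOSTNQN##*:}                # strip everything up to the last ':' to keep the UUID

  # Present both identifiers when connecting to the exported subsystem
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$HOSTNQN" --hostid="$HOSTID"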
00:15:55.769 15:02:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:55.769 15:02:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:55.769 15:02:19 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:55.769 15:02:19 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:55.769 15:02:19 -- target/nmic.sh@14 -- # nvmftestinit 00:15:55.769 15:02:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:55.769 15:02:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:55.769 15:02:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:55.769 15:02:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:55.769 15:02:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:55.769 15:02:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.769 15:02:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:55.769 15:02:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.769 15:02:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:55.769 15:02:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:55.769 15:02:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:55.769 15:02:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:55.769 15:02:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:55.769 15:02:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:55.769 15:02:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.769 15:02:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.769 15:02:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:55.769 15:02:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:55.769 15:02:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:55.769 15:02:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:55.769 15:02:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:55.769 15:02:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.769 15:02:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:55.769 15:02:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:55.769 15:02:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:55.769 15:02:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:55.769 15:02:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:55.769 15:02:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:55.769 Cannot find device "nvmf_tgt_br" 00:15:55.769 15:02:19 -- nvmf/common.sh@154 -- # true 00:15:55.769 15:02:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:55.769 Cannot find device "nvmf_tgt_br2" 00:15:55.769 15:02:19 -- nvmf/common.sh@155 -- # true 00:15:55.769 15:02:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:55.769 15:02:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:55.769 Cannot find device "nvmf_tgt_br" 00:15:55.769 15:02:19 -- nvmf/common.sh@157 -- # true 00:15:55.769 15:02:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:55.769 Cannot find device "nvmf_tgt_br2" 00:15:55.769 15:02:19 -- nvmf/common.sh@158 -- # true 00:15:55.769 15:02:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:55.769 15:02:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:55.769 15:02:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:55.769 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:15:55.769 15:02:19 -- nvmf/common.sh@161 -- # true 00:15:55.769 15:02:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:55.769 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.769 15:02:19 -- nvmf/common.sh@162 -- # true 00:15:55.769 15:02:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:55.769 15:02:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:55.769 15:02:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:55.769 15:02:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:55.769 15:02:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:55.769 15:02:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:55.769 15:02:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:55.769 15:02:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:55.769 15:02:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:55.769 15:02:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:55.769 15:02:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:55.769 15:02:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:55.769 15:02:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:56.027 15:02:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:56.027 15:02:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:56.027 15:02:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:56.027 15:02:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:56.027 15:02:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:56.027 15:02:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:56.027 15:02:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:56.027 15:02:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:56.027 15:02:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:56.027 15:02:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:56.027 15:02:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:56.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:15:56.027 00:15:56.027 --- 10.0.0.2 ping statistics --- 00:15:56.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.027 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:15:56.027 15:02:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:56.027 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:56.027 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:15:56.027 00:15:56.027 --- 10.0.0.3 ping statistics --- 00:15:56.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.027 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:56.027 15:02:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:56.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:56.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:56.027 00:15:56.027 --- 10.0.0.1 ping statistics --- 00:15:56.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.027 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:56.027 15:02:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.027 15:02:19 -- nvmf/common.sh@421 -- # return 0 00:15:56.027 15:02:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:56.027 15:02:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.027 15:02:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:56.027 15:02:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:56.027 15:02:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.027 15:02:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:56.027 15:02:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:56.027 15:02:19 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:56.027 15:02:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:56.027 15:02:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:56.027 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:15:56.027 15:02:19 -- nvmf/common.sh@469 -- # nvmfpid=86763 00:15:56.027 15:02:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:56.027 15:02:19 -- nvmf/common.sh@470 -- # waitforlisten 86763 00:15:56.027 15:02:19 -- common/autotest_common.sh@829 -- # '[' -z 86763 ']' 00:15:56.027 15:02:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.027 15:02:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.027 15:02:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.027 15:02:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.027 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:15:56.027 [2024-11-18 15:02:19.543851] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:56.027 [2024-11-18 15:02:19.543942] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.286 [2024-11-18 15:02:19.686776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:56.286 [2024-11-18 15:02:19.769653] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:56.286 [2024-11-18 15:02:19.769824] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.286 [2024-11-18 15:02:19.769842] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.286 [2024-11-18 15:02:19.769854] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
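For orientation: waitforlisten 86763 above blocks until the freshly launched nvmf_tgt answers JSON-RPC requests on /var/tmp/spdk.sock, and only then do the rpc_cmd calls that follow proceed. A rough hand-rolled equivalent is sketched here purely for illustration; the retry count, sleep interval, and the rpc.py location under the spdk_repo checkout are assumptions, not what the helper actually does:

  # Poll the SPDK JSON-RPC socket until the target responds, giving up after ~30 s
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for _ in $(seq 1 30); do
    if "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
      echo "nvmf_tgt is up and serving RPCs"
      break
    fi
    sleep 1
  done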
00:15:56.286 [2024-11-18 15:02:19.770045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.286 [2024-11-18 15:02:19.770169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.286 [2024-11-18 15:02:19.772359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:56.286 [2024-11-18 15:02:19.772398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.222 15:02:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:57.222 15:02:20 -- common/autotest_common.sh@862 -- # return 0 00:15:57.222 15:02:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:57.222 15:02:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:57.222 15:02:20 -- common/autotest_common.sh@10 -- # set +x 00:15:57.222 15:02:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.222 15:02:20 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:57.222 15:02:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.222 15:02:20 -- common/autotest_common.sh@10 -- # set +x 00:15:57.222 [2024-11-18 15:02:20.628048] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.222 15:02:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.222 15:02:20 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:57.222 15:02:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.222 15:02:20 -- common/autotest_common.sh@10 -- # set +x 00:15:57.222 Malloc0 00:15:57.222 15:02:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.222 15:02:20 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:57.222 15:02:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.222 15:02:20 -- common/autotest_common.sh@10 -- # set +x 00:15:57.222 15:02:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.222 15:02:20 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:57.222 15:02:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.222 15:02:20 -- common/autotest_common.sh@10 -- # set +x 00:15:57.222 15:02:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.222 15:02:20 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.222 15:02:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.222 15:02:20 -- common/autotest_common.sh@10 -- # set +x 00:15:57.222 [2024-11-18 15:02:20.695375] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.222 test case1: single bdev can't be used in multiple subsystems 00:15:57.222 15:02:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.222 15:02:20 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:57.222 15:02:20 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:57.222 15:02:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.222 15:02:20 -- common/autotest_common.sh@10 -- # set +x 00:15:57.222 15:02:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.222 15:02:20 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:57.223 15:02:20 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:57.223 15:02:20 -- common/autotest_common.sh@10 -- # set +x 00:15:57.223 15:02:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.223 15:02:20 -- target/nmic.sh@28 -- # nmic_status=0 00:15:57.223 15:02:20 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:57.223 15:02:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.223 15:02:20 -- common/autotest_common.sh@10 -- # set +x 00:15:57.223 [2024-11-18 15:02:20.719209] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:57.223 [2024-11-18 15:02:20.719254] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:57.223 [2024-11-18 15:02:20.719264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.223 2024/11/18 15:02:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:57.223 request: 00:15:57.223 { 00:15:57.223 "method": "nvmf_subsystem_add_ns", 00:15:57.223 "params": { 00:15:57.223 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:57.223 "namespace": { 00:15:57.223 "bdev_name": "Malloc0" 00:15:57.223 } 00:15:57.223 } 00:15:57.223 } 00:15:57.223 Got JSON-RPC error response 00:15:57.223 GoRPCClient: error on JSON-RPC call 00:15:57.223 Adding namespace failed - expected result. 00:15:57.223 test case2: host connect to nvmf target in multiple paths 00:15:57.223 15:02:20 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:57.223 15:02:20 -- target/nmic.sh@29 -- # nmic_status=1 00:15:57.223 15:02:20 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:57.223 15:02:20 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
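[editor's note] test case1 above exercises the expected failure path: Malloc0 is already claimed (type exclusive_write) by cnode1, so adding it to cnode2 is rejected with JSON-RPC code -32602, and the script records nmic_status=1 as the expected result. A hedged way to reproduce just that check by hand, reusing only commands visible in the trace:

    # Sketch only: the second add_ns is expected to fail because Malloc0
    # is already claimed by nqn.2016-06.io.spdk:cnode1.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    if ! scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo "Adding namespace failed - expected result."
    fi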
00:15:57.223 15:02:20 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:57.223 15:02:20 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:57.223 15:02:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.223 15:02:20 -- common/autotest_common.sh@10 -- # set +x 00:15:57.223 [2024-11-18 15:02:20.735316] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:57.223 15:02:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.223 15:02:20 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:57.482 15:02:20 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:57.740 15:02:21 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:57.740 15:02:21 -- common/autotest_common.sh@1187 -- # local i=0 00:15:57.740 15:02:21 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:57.740 15:02:21 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:57.740 15:02:21 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:59.644 15:02:23 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:59.644 15:02:23 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:59.644 15:02:23 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:59.644 15:02:23 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:59.644 15:02:23 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:59.644 15:02:23 -- common/autotest_common.sh@1197 -- # return 0 00:15:59.644 15:02:23 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:59.644 [global] 00:15:59.644 thread=1 00:15:59.644 invalidate=1 00:15:59.644 rw=write 00:15:59.644 time_based=1 00:15:59.644 runtime=1 00:15:59.644 ioengine=libaio 00:15:59.644 direct=1 00:15:59.644 bs=4096 00:15:59.644 iodepth=1 00:15:59.644 norandommap=0 00:15:59.644 numjobs=1 00:15:59.644 00:15:59.644 verify_dump=1 00:15:59.644 verify_backlog=512 00:15:59.644 verify_state_save=0 00:15:59.644 do_verify=1 00:15:59.644 verify=crc32c-intel 00:15:59.644 [job0] 00:15:59.644 filename=/dev/nvme0n1 00:15:59.644 Could not set queue depth (nvme0n1) 00:15:59.903 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:59.903 fio-3.35 00:15:59.903 Starting 1 thread 00:16:00.839 00:16:00.839 job0: (groupid=0, jobs=1): err= 0: pid=86877: Mon Nov 18 15:02:24 2024 00:16:00.839 read: IOPS=3301, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1001msec) 00:16:00.839 slat (nsec): min=11277, max=58807, avg=14118.22, stdev=4860.27 00:16:00.839 clat (usec): min=115, max=278, avg=144.86, stdev=17.56 00:16:00.839 lat (usec): min=127, max=293, avg=158.98, stdev=18.80 00:16:00.839 clat percentiles (usec): 00:16:00.839 | 1.00th=[ 121], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 131], 00:16:00.839 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 145], 00:16:00.839 | 70.00th=[ 151], 80.00th=[ 159], 90.00th=[ 169], 95.00th=[ 180], 00:16:00.839 | 99.00th=[ 200], 99.50th=[ 208], 
99.90th=[ 229], 99.95th=[ 265], 00:16:00.839 | 99.99th=[ 281] 00:16:00.839 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:16:00.839 slat (usec): min=17, max=110, avg=22.50, stdev= 7.67 00:16:00.839 clat (usec): min=81, max=392, avg=106.56, stdev=16.94 00:16:00.839 lat (usec): min=99, max=440, avg=129.06, stdev=20.00 00:16:00.839 clat percentiles (usec): 00:16:00.839 | 1.00th=[ 87], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 94], 00:16:00.839 | 30.00th=[ 97], 40.00th=[ 99], 50.00th=[ 102], 60.00th=[ 105], 00:16:00.839 | 70.00th=[ 111], 80.00th=[ 118], 90.00th=[ 130], 95.00th=[ 139], 00:16:00.839 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 186], 99.95th=[ 314], 00:16:00.840 | 99.99th=[ 392] 00:16:00.840 bw ( KiB/s): min=14936, max=14936, per=100.00%, avg=14936.00, stdev= 0.00, samples=1 00:16:00.840 iops : min= 3734, max= 3734, avg=3734.00, stdev= 0.00, samples=1 00:16:00.840 lat (usec) : 100=22.80%, 250=77.14%, 500=0.06% 00:16:00.840 cpu : usr=3.10%, sys=8.90%, ctx=6889, majf=0, minf=5 00:16:00.840 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.840 issued rwts: total=3305,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.840 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:00.840 00:16:00.840 Run status group 0 (all jobs): 00:16:00.840 READ: bw=12.9MiB/s (13.5MB/s), 12.9MiB/s-12.9MiB/s (13.5MB/s-13.5MB/s), io=12.9MiB (13.5MB), run=1001-1001msec 00:16:00.840 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:16:00.840 00:16:00.840 Disk stats (read/write): 00:16:00.840 nvme0n1: ios=3122/3093, merge=0/0, ticks=483/363, in_queue=846, util=91.18% 00:16:00.840 15:02:24 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:01.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:01.099 15:02:24 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:01.099 15:02:24 -- common/autotest_common.sh@1208 -- # local i=0 00:16:01.099 15:02:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:01.099 15:02:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:01.099 15:02:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:01.099 15:02:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:01.099 15:02:24 -- common/autotest_common.sh@1220 -- # return 0 00:16:01.099 15:02:24 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:01.099 15:02:24 -- target/nmic.sh@53 -- # nvmftestfini 00:16:01.099 15:02:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:01.099 15:02:24 -- nvmf/common.sh@116 -- # sync 00:16:01.099 15:02:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:01.099 15:02:24 -- nvmf/common.sh@119 -- # set +e 00:16:01.099 15:02:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:01.099 15:02:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:01.099 rmmod nvme_tcp 00:16:01.099 rmmod nvme_fabrics 00:16:01.099 rmmod nvme_keyring 00:16:01.099 15:02:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:01.099 15:02:24 -- nvmf/common.sh@123 -- # set -e 00:16:01.099 15:02:24 -- nvmf/common.sh@124 -- # return 0 00:16:01.099 15:02:24 -- nvmf/common.sh@477 -- # '[' -n 86763 ']' 00:16:01.099 15:02:24 -- nvmf/common.sh@478 -- # 
killprocess 86763 00:16:01.099 15:02:24 -- common/autotest_common.sh@936 -- # '[' -z 86763 ']' 00:16:01.099 15:02:24 -- common/autotest_common.sh@940 -- # kill -0 86763 00:16:01.099 15:02:24 -- common/autotest_common.sh@941 -- # uname 00:16:01.099 15:02:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:01.099 15:02:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86763 00:16:01.099 15:02:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:01.099 15:02:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:01.099 killing process with pid 86763 00:16:01.099 15:02:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86763' 00:16:01.099 15:02:24 -- common/autotest_common.sh@955 -- # kill 86763 00:16:01.099 15:02:24 -- common/autotest_common.sh@960 -- # wait 86763 00:16:01.358 15:02:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:01.358 15:02:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:01.358 15:02:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:01.358 15:02:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:01.358 15:02:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:01.358 15:02:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.358 15:02:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.358 15:02:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.358 15:02:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:01.358 00:16:01.358 real 0m6.006s 00:16:01.358 user 0m20.175s 00:16:01.358 sys 0m1.373s 00:16:01.358 ************************************ 00:16:01.358 END TEST nvmf_nmic 00:16:01.358 ************************************ 00:16:01.358 15:02:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:01.358 15:02:24 -- common/autotest_common.sh@10 -- # set +x 00:16:01.617 15:02:24 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:01.617 15:02:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:01.617 15:02:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:01.617 15:02:24 -- common/autotest_common.sh@10 -- # set +x 00:16:01.617 ************************************ 00:16:01.617 START TEST nvmf_fio_target 00:16:01.617 ************************************ 00:16:01.617 15:02:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:01.617 * Looking for test storage... 
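[editor's note] Before nvmf_fio_target begins, the nmic run above tears its state down: the host disconnects from cnode1, the host-side NVMe modules are unloaded, the target process is killed and reaped, and the initiator interface is flushed. Condensed into a hedged sketch built only from the commands seen in the trace ($nvmfpid stands for the pid nvmfappstart recorded, 86763 in this run):

    # Hedged sketch of the cleanup path logged above.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"
    ip -4 addr flush nvmf_init_if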
00:16:01.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:01.617 15:02:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:01.617 15:02:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:01.617 15:02:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:01.617 15:02:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:01.617 15:02:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:01.617 15:02:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:01.617 15:02:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:01.617 15:02:25 -- scripts/common.sh@335 -- # IFS=.-: 00:16:01.617 15:02:25 -- scripts/common.sh@335 -- # read -ra ver1 00:16:01.617 15:02:25 -- scripts/common.sh@336 -- # IFS=.-: 00:16:01.617 15:02:25 -- scripts/common.sh@336 -- # read -ra ver2 00:16:01.617 15:02:25 -- scripts/common.sh@337 -- # local 'op=<' 00:16:01.617 15:02:25 -- scripts/common.sh@339 -- # ver1_l=2 00:16:01.617 15:02:25 -- scripts/common.sh@340 -- # ver2_l=1 00:16:01.617 15:02:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:01.617 15:02:25 -- scripts/common.sh@343 -- # case "$op" in 00:16:01.617 15:02:25 -- scripts/common.sh@344 -- # : 1 00:16:01.617 15:02:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:01.617 15:02:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:01.617 15:02:25 -- scripts/common.sh@364 -- # decimal 1 00:16:01.617 15:02:25 -- scripts/common.sh@352 -- # local d=1 00:16:01.617 15:02:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:01.617 15:02:25 -- scripts/common.sh@354 -- # echo 1 00:16:01.617 15:02:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:01.617 15:02:25 -- scripts/common.sh@365 -- # decimal 2 00:16:01.617 15:02:25 -- scripts/common.sh@352 -- # local d=2 00:16:01.617 15:02:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:01.617 15:02:25 -- scripts/common.sh@354 -- # echo 2 00:16:01.617 15:02:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:01.617 15:02:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:01.617 15:02:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:01.617 15:02:25 -- scripts/common.sh@367 -- # return 0 00:16:01.617 15:02:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:01.617 15:02:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:01.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.617 --rc genhtml_branch_coverage=1 00:16:01.617 --rc genhtml_function_coverage=1 00:16:01.617 --rc genhtml_legend=1 00:16:01.617 --rc geninfo_all_blocks=1 00:16:01.617 --rc geninfo_unexecuted_blocks=1 00:16:01.617 00:16:01.617 ' 00:16:01.617 15:02:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:01.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.618 --rc genhtml_branch_coverage=1 00:16:01.618 --rc genhtml_function_coverage=1 00:16:01.618 --rc genhtml_legend=1 00:16:01.618 --rc geninfo_all_blocks=1 00:16:01.618 --rc geninfo_unexecuted_blocks=1 00:16:01.618 00:16:01.618 ' 00:16:01.618 15:02:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:01.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.618 --rc genhtml_branch_coverage=1 00:16:01.618 --rc genhtml_function_coverage=1 00:16:01.618 --rc genhtml_legend=1 00:16:01.618 --rc geninfo_all_blocks=1 00:16:01.618 --rc geninfo_unexecuted_blocks=1 00:16:01.618 00:16:01.618 ' 00:16:01.618 
15:02:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:01.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.618 --rc genhtml_branch_coverage=1 00:16:01.618 --rc genhtml_function_coverage=1 00:16:01.618 --rc genhtml_legend=1 00:16:01.618 --rc geninfo_all_blocks=1 00:16:01.618 --rc geninfo_unexecuted_blocks=1 00:16:01.618 00:16:01.618 ' 00:16:01.618 15:02:25 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:01.618 15:02:25 -- nvmf/common.sh@7 -- # uname -s 00:16:01.618 15:02:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.618 15:02:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.618 15:02:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.618 15:02:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.618 15:02:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.618 15:02:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.618 15:02:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.618 15:02:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.618 15:02:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.618 15:02:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.618 15:02:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:16:01.618 15:02:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:16:01.618 15:02:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.618 15:02:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.618 15:02:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:01.618 15:02:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:01.618 15:02:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.618 15:02:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.618 15:02:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.618 15:02:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.618 15:02:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.618 15:02:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.618 15:02:25 -- paths/export.sh@5 -- # export PATH 00:16:01.618 15:02:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.618 15:02:25 -- nvmf/common.sh@46 -- # : 0 00:16:01.618 15:02:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:01.618 15:02:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:01.618 15:02:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:01.618 15:02:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.618 15:02:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.618 15:02:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:01.618 15:02:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:01.618 15:02:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:01.618 15:02:25 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:01.618 15:02:25 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:01.618 15:02:25 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:01.618 15:02:25 -- target/fio.sh@16 -- # nvmftestinit 00:16:01.618 15:02:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:01.618 15:02:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.618 15:02:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:01.618 15:02:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:01.618 15:02:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:01.618 15:02:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.618 15:02:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.618 15:02:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.618 15:02:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:01.618 15:02:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:01.618 15:02:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:01.618 15:02:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:01.618 15:02:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:01.618 15:02:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:01.618 15:02:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.618 15:02:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.618 15:02:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:01.618 15:02:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:01.618 15:02:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:01.618 15:02:25 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:01.618 15:02:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:01.618 15:02:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.618 15:02:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:01.618 15:02:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:01.618 15:02:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:01.618 15:02:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:01.618 15:02:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:01.618 15:02:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:01.877 Cannot find device "nvmf_tgt_br" 00:16:01.877 15:02:25 -- nvmf/common.sh@154 -- # true 00:16:01.877 15:02:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:01.877 Cannot find device "nvmf_tgt_br2" 00:16:01.877 15:02:25 -- nvmf/common.sh@155 -- # true 00:16:01.877 15:02:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:01.877 15:02:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:01.877 Cannot find device "nvmf_tgt_br" 00:16:01.877 15:02:25 -- nvmf/common.sh@157 -- # true 00:16:01.877 15:02:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:01.877 Cannot find device "nvmf_tgt_br2" 00:16:01.877 15:02:25 -- nvmf/common.sh@158 -- # true 00:16:01.877 15:02:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:01.877 15:02:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:01.877 15:02:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:01.877 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:01.877 15:02:25 -- nvmf/common.sh@161 -- # true 00:16:01.877 15:02:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:01.877 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:01.877 15:02:25 -- nvmf/common.sh@162 -- # true 00:16:01.877 15:02:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:01.877 15:02:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:01.877 15:02:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:01.877 15:02:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:01.877 15:02:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:01.877 15:02:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:01.877 15:02:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:01.878 15:02:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:01.878 15:02:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:01.878 15:02:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:01.878 15:02:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:01.878 15:02:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:01.878 15:02:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:01.878 15:02:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:01.878 15:02:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
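[editor's note] The nvmf_veth_init block above builds the virtual test network: a network namespace for the target, veth pairs for the initiator and target sides, and addresses 10.0.0.1 (initiator) through 10.0.0.3 (second target interface). A condensed sketch of that topology, keeping the interface names from the trace and omitting the second target pair for brevity:

    # Sketch of the topology assembled by nvmf_veth_init in the trace above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up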
00:16:01.878 15:02:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:01.878 15:02:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:01.878 15:02:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:01.878 15:02:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:01.878 15:02:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:01.878 15:02:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:01.878 15:02:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:01.878 15:02:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:02.136 15:02:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:02.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:02.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:16:02.136 00:16:02.136 --- 10.0.0.2 ping statistics --- 00:16:02.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.136 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:02.136 15:02:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:02.136 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:02.136 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:16:02.136 00:16:02.136 --- 10.0.0.3 ping statistics --- 00:16:02.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.136 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:02.136 15:02:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:02.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:02.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:02.136 00:16:02.136 --- 10.0.0.1 ping statistics --- 00:16:02.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.137 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:02.137 15:02:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:02.137 15:02:25 -- nvmf/common.sh@421 -- # return 0 00:16:02.137 15:02:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:02.137 15:02:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:02.137 15:02:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:02.137 15:02:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:02.137 15:02:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:02.137 15:02:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:02.137 15:02:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:02.137 15:02:25 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:02.137 15:02:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:02.137 15:02:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:02.137 15:02:25 -- common/autotest_common.sh@10 -- # set +x 00:16:02.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
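[editor's note] The remaining setup above bridges the two veth legs, opens TCP/4420 on the initiator interface, sanity-pings both directions, and then launches nvmf_tgt inside the target namespace with core mask 0xF (hence the four reactors that follow). A hedged sketch, again using only commands present in the trace (the backgrounding with & is an assumption for the sketch):

    # Sketch: bridge the veth legs, open TCP/4420, verify reachability,
    # then start the target inside the namespace as this run does.
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &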
00:16:02.137 15:02:25 -- nvmf/common.sh@469 -- # nvmfpid=87063 00:16:02.137 15:02:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:02.137 15:02:25 -- nvmf/common.sh@470 -- # waitforlisten 87063 00:16:02.137 15:02:25 -- common/autotest_common.sh@829 -- # '[' -z 87063 ']' 00:16:02.137 15:02:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.137 15:02:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:02.137 15:02:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.137 15:02:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:02.137 15:02:25 -- common/autotest_common.sh@10 -- # set +x 00:16:02.137 [2024-11-18 15:02:25.559417] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:02.137 [2024-11-18 15:02:25.559507] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.137 [2024-11-18 15:02:25.702263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:02.395 [2024-11-18 15:02:25.777809] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:02.395 [2024-11-18 15:02:25.778212] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:02.395 [2024-11-18 15:02:25.778432] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:02.395 [2024-11-18 15:02:25.778605] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
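[editor's note] The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step above blocks until the freshly started target answers on its RPC socket. One hypothetical way to express that wait (this is an assumption, not necessarily what waitforlisten does internally; rpc_get_methods is simply a cheap RPC to probe with):

    # Hypothetical polling loop standing in for the waitforlisten step.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done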
00:16:02.395 [2024-11-18 15:02:25.778878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.395 [2024-11-18 15:02:25.779027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:02.395 [2024-11-18 15:02:25.779798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:02.395 [2024-11-18 15:02:25.779802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.342 15:02:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:03.342 15:02:26 -- common/autotest_common.sh@862 -- # return 0 00:16:03.342 15:02:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:03.342 15:02:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:03.342 15:02:26 -- common/autotest_common.sh@10 -- # set +x 00:16:03.342 15:02:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.342 15:02:26 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:03.342 [2024-11-18 15:02:26.877211] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:03.343 15:02:26 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:03.603 15:02:27 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:03.603 15:02:27 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:03.861 15:02:27 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:03.861 15:02:27 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:04.430 15:02:27 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:04.430 15:02:27 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:04.698 15:02:28 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:04.698 15:02:28 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:04.698 15:02:28 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:04.977 15:02:28 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:04.977 15:02:28 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:05.235 15:02:28 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:05.235 15:02:28 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:05.494 15:02:28 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:05.494 15:02:28 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:05.752 15:02:29 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:06.009 15:02:29 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:06.009 15:02:29 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:06.268 15:02:29 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:06.268 15:02:29 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:06.268 15:02:29 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:06.527 [2024-11-18 15:02:29.995073] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.527 15:02:30 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:06.785 15:02:30 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:07.043 15:02:30 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:07.301 15:02:30 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:07.301 15:02:30 -- common/autotest_common.sh@1187 -- # local i=0 00:16:07.301 15:02:30 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:07.301 15:02:30 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:16:07.301 15:02:30 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:16:07.301 15:02:30 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:09.201 15:02:32 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:09.201 15:02:32 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:09.201 15:02:32 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:09.201 15:02:32 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:16:09.201 15:02:32 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:09.201 15:02:32 -- common/autotest_common.sh@1197 -- # return 0 00:16:09.201 15:02:32 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:09.201 [global] 00:16:09.201 thread=1 00:16:09.201 invalidate=1 00:16:09.201 rw=write 00:16:09.201 time_based=1 00:16:09.201 runtime=1 00:16:09.201 ioengine=libaio 00:16:09.201 direct=1 00:16:09.201 bs=4096 00:16:09.201 iodepth=1 00:16:09.201 norandommap=0 00:16:09.201 numjobs=1 00:16:09.201 00:16:09.201 verify_dump=1 00:16:09.201 verify_backlog=512 00:16:09.201 verify_state_save=0 00:16:09.201 do_verify=1 00:16:09.201 verify=crc32c-intel 00:16:09.201 [job0] 00:16:09.201 filename=/dev/nvme0n1 00:16:09.201 [job1] 00:16:09.201 filename=/dev/nvme0n2 00:16:09.201 [job2] 00:16:09.201 filename=/dev/nvme0n3 00:16:09.201 [job3] 00:16:09.201 filename=/dev/nvme0n4 00:16:09.460 Could not set queue depth (nvme0n1) 00:16:09.460 Could not set queue depth (nvme0n2) 00:16:09.460 Could not set queue depth (nvme0n3) 00:16:09.460 Could not set queue depth (nvme0n4) 00:16:09.460 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.460 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.460 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.460 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.460 fio-3.35 00:16:09.460 Starting 4 threads 00:16:10.834 00:16:10.834 job0: (groupid=0, jobs=1): err= 0: pid=87353: Mon Nov 18 15:02:34 2024 00:16:10.834 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:10.834 slat (nsec): min=13015, max=70861, avg=16321.18, stdev=5273.18 00:16:10.834 clat (usec): min=134, max=722, avg=180.53, stdev=31.19 
00:16:10.834 lat (usec): min=148, max=735, avg=196.85, stdev=32.00 00:16:10.834 clat percentiles (usec): 00:16:10.834 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:16:10.834 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 182], 00:16:10.834 | 70.00th=[ 192], 80.00th=[ 206], 90.00th=[ 221], 95.00th=[ 235], 00:16:10.834 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 318], 99.95th=[ 404], 00:16:10.834 | 99.99th=[ 725] 00:16:10.834 write: IOPS=2881, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1001msec); 0 zone resets 00:16:10.834 slat (usec): min=20, max=132, avg=26.13, stdev= 7.73 00:16:10.834 clat (usec): min=93, max=495, avg=142.56, stdev=31.46 00:16:10.834 lat (usec): min=114, max=517, avg=168.69, stdev=34.53 00:16:10.834 clat percentiles (usec): 00:16:10.834 | 1.00th=[ 102], 5.00th=[ 108], 10.00th=[ 112], 20.00th=[ 117], 00:16:10.834 | 30.00th=[ 122], 40.00th=[ 128], 50.00th=[ 135], 60.00th=[ 143], 00:16:10.834 | 70.00th=[ 153], 80.00th=[ 167], 90.00th=[ 188], 95.00th=[ 202], 00:16:10.834 | 99.00th=[ 233], 99.50th=[ 255], 99.90th=[ 285], 99.95th=[ 371], 00:16:10.834 | 99.99th=[ 494] 00:16:10.834 bw ( KiB/s): min=11416, max=11416, per=36.30%, avg=11416.00, stdev= 0.00, samples=1 00:16:10.834 iops : min= 2854, max= 2854, avg=2854.00, stdev= 0.00, samples=1 00:16:10.834 lat (usec) : 100=0.33%, 250=98.38%, 500=1.27%, 750=0.02% 00:16:10.834 cpu : usr=1.70%, sys=8.60%, ctx=5445, majf=0, minf=15 00:16:10.834 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.834 issued rwts: total=2560,2884,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.834 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.834 job1: (groupid=0, jobs=1): err= 0: pid=87358: Mon Nov 18 15:02:34 2024 00:16:10.834 read: IOPS=1448, BW=5794KiB/s (5933kB/s)(5800KiB/1001msec) 00:16:10.834 slat (nsec): min=10900, max=70617, avg=22516.74, stdev=7788.40 00:16:10.834 clat (usec): min=191, max=640, avg=361.58, stdev=61.25 00:16:10.834 lat (usec): min=210, max=661, avg=384.10, stdev=61.55 00:16:10.834 clat percentiles (usec): 00:16:10.834 | 1.00th=[ 262], 5.00th=[ 285], 10.00th=[ 297], 20.00th=[ 314], 00:16:10.834 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 347], 60.00th=[ 359], 00:16:10.834 | 70.00th=[ 379], 80.00th=[ 420], 90.00th=[ 449], 95.00th=[ 478], 00:16:10.834 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 586], 99.95th=[ 644], 00:16:10.834 | 99.99th=[ 644] 00:16:10.834 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:10.834 slat (nsec): min=14473, max=92737, avg=33112.70, stdev=10167.45 00:16:10.834 clat (usec): min=111, max=468, avg=250.28, stdev=42.71 00:16:10.834 lat (usec): min=133, max=506, avg=283.39, stdev=45.63 00:16:10.834 clat percentiles (usec): 00:16:10.834 | 1.00th=[ 165], 5.00th=[ 198], 10.00th=[ 208], 20.00th=[ 219], 00:16:10.834 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 255], 00:16:10.834 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 326], 00:16:10.834 | 99.00th=[ 396], 99.50th=[ 416], 99.90th=[ 461], 99.95th=[ 469], 00:16:10.834 | 99.99th=[ 469] 00:16:10.834 bw ( KiB/s): min= 8192, max= 8192, per=26.05%, avg=8192.00, stdev= 0.00, samples=1 00:16:10.834 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:10.834 lat (usec) : 250=28.83%, 500=69.79%, 750=1.37% 00:16:10.834 cpu : usr=1.70%, sys=6.40%, ctx=2987, 
majf=0, minf=6 00:16:10.834 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.834 issued rwts: total=1450,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.834 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.834 job2: (groupid=0, jobs=1): err= 0: pid=87359: Mon Nov 18 15:02:34 2024 00:16:10.834 read: IOPS=1423, BW=5694KiB/s (5831kB/s)(5700KiB/1001msec) 00:16:10.835 slat (usec): min=16, max=102, avg=25.84, stdev= 8.71 00:16:10.835 clat (usec): min=234, max=2550, avg=361.82, stdev=74.14 00:16:10.835 lat (usec): min=286, max=2586, avg=387.66, stdev=74.86 00:16:10.835 clat percentiles (usec): 00:16:10.835 | 1.00th=[ 273], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 322], 00:16:10.835 | 30.00th=[ 330], 40.00th=[ 343], 50.00th=[ 355], 60.00th=[ 363], 00:16:10.835 | 70.00th=[ 379], 80.00th=[ 400], 90.00th=[ 424], 95.00th=[ 445], 00:16:10.835 | 99.00th=[ 486], 99.50th=[ 502], 99.90th=[ 635], 99.95th=[ 2540], 00:16:10.835 | 99.99th=[ 2540] 00:16:10.835 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:10.835 slat (usec): min=25, max=131, avg=37.14, stdev= 9.62 00:16:10.835 clat (usec): min=133, max=2782, avg=248.77, stdev=81.14 00:16:10.835 lat (usec): min=168, max=2816, avg=285.91, stdev=81.83 00:16:10.835 clat percentiles (usec): 00:16:10.835 | 1.00th=[ 167], 5.00th=[ 180], 10.00th=[ 188], 20.00th=[ 208], 00:16:10.835 | 30.00th=[ 223], 40.00th=[ 235], 50.00th=[ 245], 60.00th=[ 253], 00:16:10.835 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 302], 95.00th=[ 343], 00:16:10.835 | 99.00th=[ 408], 99.50th=[ 441], 99.90th=[ 465], 99.95th=[ 2769], 00:16:10.835 | 99.99th=[ 2769] 00:16:10.835 bw ( KiB/s): min= 8128, max= 8128, per=25.85%, avg=8128.00, stdev= 0.00, samples=1 00:16:10.835 iops : min= 2032, max= 2032, avg=2032.00, stdev= 0.00, samples=1 00:16:10.835 lat (usec) : 250=29.69%, 500=69.98%, 750=0.27% 00:16:10.835 lat (msec) : 4=0.07% 00:16:10.835 cpu : usr=1.40%, sys=7.30%, ctx=2962, majf=0, minf=15 00:16:10.835 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.835 issued rwts: total=1425,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.835 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.835 job3: (groupid=0, jobs=1): err= 0: pid=87360: Mon Nov 18 15:02:34 2024 00:16:10.835 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:10.835 slat (nsec): min=12573, max=65451, avg=19151.91, stdev=5111.54 00:16:10.835 clat (usec): min=190, max=7430, avg=311.07, stdev=214.42 00:16:10.835 lat (usec): min=207, max=7445, avg=330.22, stdev=214.82 00:16:10.835 clat percentiles (usec): 00:16:10.835 | 1.00th=[ 200], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 229], 00:16:10.835 | 30.00th=[ 239], 40.00th=[ 249], 50.00th=[ 265], 60.00th=[ 285], 00:16:10.835 | 70.00th=[ 330], 80.00th=[ 371], 90.00th=[ 478], 95.00th=[ 506], 00:16:10.835 | 99.00th=[ 553], 99.50th=[ 586], 99.90th=[ 2057], 99.95th=[ 7439], 00:16:10.835 | 99.99th=[ 7439] 00:16:10.835 write: IOPS=1912, BW=7648KiB/s (7832kB/s)(7656KiB/1001msec); 0 zone resets 00:16:10.835 slat (usec): min=11, max=208, avg=28.54, stdev= 9.28 00:16:10.835 clat (usec): min=126, max=449, avg=225.33, 
stdev=53.48 00:16:10.835 lat (usec): min=144, max=495, avg=253.86, stdev=53.36 00:16:10.835 clat percentiles (usec): 00:16:10.835 | 1.00th=[ 151], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 180], 00:16:10.835 | 30.00th=[ 190], 40.00th=[ 200], 50.00th=[ 212], 60.00th=[ 227], 00:16:10.835 | 70.00th=[ 243], 80.00th=[ 269], 90.00th=[ 306], 95.00th=[ 330], 00:16:10.835 | 99.00th=[ 379], 99.50th=[ 396], 99.90th=[ 429], 99.95th=[ 449], 00:16:10.835 | 99.99th=[ 449] 00:16:10.835 bw ( KiB/s): min= 8192, max= 8192, per=26.05%, avg=8192.00, stdev= 0.00, samples=1 00:16:10.835 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:10.835 lat (usec) : 250=58.81%, 500=38.61%, 750=2.46% 00:16:10.835 lat (msec) : 2=0.06%, 4=0.03%, 10=0.03% 00:16:10.835 cpu : usr=1.70%, sys=6.30%, ctx=3452, majf=0, minf=3 00:16:10.835 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.835 issued rwts: total=1536,1914,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.835 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.835 00:16:10.835 Run status group 0 (all jobs): 00:16:10.835 READ: bw=27.2MiB/s (28.5MB/s), 5694KiB/s-9.99MiB/s (5831kB/s-10.5MB/s), io=27.2MiB (28.6MB), run=1001-1001msec 00:16:10.835 WRITE: bw=30.7MiB/s (32.2MB/s), 6138KiB/s-11.3MiB/s (6285kB/s-11.8MB/s), io=30.7MiB (32.2MB), run=1001-1001msec 00:16:10.835 00:16:10.835 Disk stats (read/write): 00:16:10.835 nvme0n1: ios=2110/2560, merge=0/0, ticks=434/406, in_queue=840, util=88.58% 00:16:10.835 nvme0n2: ios=1144/1536, merge=0/0, ticks=421/395, in_queue=816, util=88.54% 00:16:10.835 nvme0n3: ios=1082/1536, merge=0/0, ticks=396/400, in_queue=796, util=89.13% 00:16:10.835 nvme0n4: ios=1442/1536, merge=0/0, ticks=442/347, in_queue=789, util=89.27% 00:16:10.835 15:02:34 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:10.835 [global] 00:16:10.835 thread=1 00:16:10.835 invalidate=1 00:16:10.835 rw=randwrite 00:16:10.835 time_based=1 00:16:10.835 runtime=1 00:16:10.835 ioengine=libaio 00:16:10.835 direct=1 00:16:10.835 bs=4096 00:16:10.835 iodepth=1 00:16:10.835 norandommap=0 00:16:10.835 numjobs=1 00:16:10.835 00:16:10.835 verify_dump=1 00:16:10.835 verify_backlog=512 00:16:10.835 verify_state_save=0 00:16:10.835 do_verify=1 00:16:10.835 verify=crc32c-intel 00:16:10.835 [job0] 00:16:10.835 filename=/dev/nvme0n1 00:16:10.835 [job1] 00:16:10.835 filename=/dev/nvme0n2 00:16:10.835 [job2] 00:16:10.835 filename=/dev/nvme0n3 00:16:10.835 [job3] 00:16:10.835 filename=/dev/nvme0n4 00:16:10.835 Could not set queue depth (nvme0n1) 00:16:10.835 Could not set queue depth (nvme0n2) 00:16:10.835 Could not set queue depth (nvme0n3) 00:16:10.835 Could not set queue depth (nvme0n4) 00:16:10.835 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.835 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.835 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.835 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.835 fio-3.35 00:16:10.835 Starting 4 threads 00:16:12.210 00:16:12.210 job0: (groupid=0, jobs=1): err= 0: pid=87413: Mon 
Nov 18 15:02:35 2024 00:16:12.210 read: IOPS=1032, BW=4132KiB/s (4231kB/s)(4136KiB/1001msec) 00:16:12.210 slat (usec): min=15, max=121, avg=22.87, stdev= 7.94 00:16:12.210 clat (usec): min=261, max=960, avg=409.02, stdev=54.31 00:16:12.210 lat (usec): min=328, max=994, avg=431.89, stdev=54.73 00:16:12.210 clat percentiles (usec): 00:16:12.210 | 1.00th=[ 330], 5.00th=[ 351], 10.00th=[ 359], 20.00th=[ 371], 00:16:12.210 | 30.00th=[ 379], 40.00th=[ 392], 50.00th=[ 400], 60.00th=[ 408], 00:16:12.210 | 70.00th=[ 424], 80.00th=[ 441], 90.00th=[ 461], 95.00th=[ 490], 00:16:12.210 | 99.00th=[ 627], 99.50th=[ 685], 99.90th=[ 791], 99.95th=[ 963], 00:16:12.210 | 99.99th=[ 963] 00:16:12.210 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:12.210 slat (usec): min=24, max=101, avg=42.40, stdev= 8.90 00:16:12.210 clat (usec): min=140, max=2378, avg=312.73, stdev=80.06 00:16:12.210 lat (usec): min=168, max=2414, avg=355.14, stdev=79.40 00:16:12.210 clat percentiles (usec): 00:16:12.210 | 1.00th=[ 202], 5.00th=[ 233], 10.00th=[ 249], 20.00th=[ 265], 00:16:12.210 | 30.00th=[ 277], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 310], 00:16:12.210 | 70.00th=[ 334], 80.00th=[ 375], 90.00th=[ 396], 95.00th=[ 412], 00:16:12.210 | 99.00th=[ 457], 99.50th=[ 474], 99.90th=[ 979], 99.95th=[ 2376], 00:16:12.210 | 99.99th=[ 2376] 00:16:12.210 bw ( KiB/s): min= 6080, max= 6080, per=19.18%, avg=6080.00, stdev= 0.00, samples=1 00:16:12.210 iops : min= 1520, max= 1520, avg=1520.00, stdev= 0.00, samples=1 00:16:12.210 lat (usec) : 250=6.11%, 500=92.26%, 750=1.44%, 1000=0.16% 00:16:12.210 lat (msec) : 4=0.04% 00:16:12.210 cpu : usr=2.30%, sys=6.10%, ctx=2575, majf=0, minf=11 00:16:12.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.210 issued rwts: total=1034,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.210 job1: (groupid=0, jobs=1): err= 0: pid=87414: Mon Nov 18 15:02:35 2024 00:16:12.210 read: IOPS=2236, BW=8947KiB/s (9162kB/s)(8956KiB/1001msec) 00:16:12.210 slat (nsec): min=12846, max=67356, avg=16731.42, stdev=5090.85 00:16:12.210 clat (usec): min=150, max=327, avg=200.88, stdev=23.46 00:16:12.210 lat (usec): min=167, max=356, avg=217.61, stdev=24.35 00:16:12.210 clat percentiles (usec): 00:16:12.210 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 182], 00:16:12.210 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 204], 00:16:12.210 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 231], 95.00th=[ 245], 00:16:12.210 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 310], 99.95th=[ 322], 00:16:12.210 | 99.99th=[ 326] 00:16:12.211 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:12.211 slat (usec): min=19, max=100, avg=26.08, stdev= 7.50 00:16:12.211 clat (usec): min=91, max=1606, avg=170.58, stdev=36.84 00:16:12.211 lat (usec): min=112, max=1627, avg=196.66, stdev=38.10 00:16:12.211 clat percentiles (usec): 00:16:12.211 | 1.00th=[ 127], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 151], 00:16:12.211 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 174], 00:16:12.211 | 70.00th=[ 180], 80.00th=[ 188], 90.00th=[ 200], 95.00th=[ 212], 00:16:12.211 | 99.00th=[ 241], 99.50th=[ 255], 99.90th=[ 302], 99.95th=[ 302], 00:16:12.211 | 99.99th=[ 1614] 00:16:12.211 bw ( KiB/s): 
min=10696, max=10696, per=33.74%, avg=10696.00, stdev= 0.00, samples=1 00:16:12.211 iops : min= 2674, max= 2674, avg=2674.00, stdev= 0.00, samples=1 00:16:12.211 lat (usec) : 100=0.06%, 250=97.85%, 500=2.06% 00:16:12.211 lat (msec) : 2=0.02% 00:16:12.211 cpu : usr=1.90%, sys=7.50%, ctx=4800, majf=0, minf=16 00:16:12.211 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.211 issued rwts: total=2239,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.211 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.211 job2: (groupid=0, jobs=1): err= 0: pid=87415: Mon Nov 18 15:02:35 2024 00:16:12.211 read: IOPS=1040, BW=4164KiB/s (4264kB/s)(4168KiB/1001msec) 00:16:12.211 slat (usec): min=16, max=126, avg=32.04, stdev= 9.94 00:16:12.211 clat (usec): min=235, max=831, avg=395.60, stdev=53.37 00:16:12.211 lat (usec): min=270, max=860, avg=427.63, stdev=52.56 00:16:12.211 clat percentiles (usec): 00:16:12.211 | 1.00th=[ 297], 5.00th=[ 334], 10.00th=[ 347], 20.00th=[ 359], 00:16:12.211 | 30.00th=[ 367], 40.00th=[ 375], 50.00th=[ 388], 60.00th=[ 396], 00:16:12.211 | 70.00th=[ 412], 80.00th=[ 433], 90.00th=[ 457], 95.00th=[ 478], 00:16:12.211 | 99.00th=[ 594], 99.50th=[ 652], 99.90th=[ 742], 99.95th=[ 832], 00:16:12.211 | 99.99th=[ 832] 00:16:12.211 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:12.211 slat (usec): min=27, max=119, avg=43.44, stdev= 9.52 00:16:12.211 clat (usec): min=156, max=2077, avg=311.32, stdev=74.07 00:16:12.211 lat (usec): min=190, max=2124, avg=354.77, stdev=73.31 00:16:12.211 clat percentiles (usec): 00:16:12.211 | 1.00th=[ 206], 5.00th=[ 235], 10.00th=[ 249], 20.00th=[ 265], 00:16:12.211 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 310], 00:16:12.211 | 70.00th=[ 330], 80.00th=[ 371], 90.00th=[ 396], 95.00th=[ 412], 00:16:12.211 | 99.00th=[ 441], 99.50th=[ 457], 99.90th=[ 988], 99.95th=[ 2073], 00:16:12.211 | 99.99th=[ 2073] 00:16:12.211 bw ( KiB/s): min= 6104, max= 6104, per=19.25%, avg=6104.00, stdev= 0.00, samples=1 00:16:12.211 iops : min= 1526, max= 1526, avg=1526.00, stdev= 0.00, samples=1 00:16:12.211 lat (usec) : 250=6.09%, 500=92.59%, 750=1.20%, 1000=0.08% 00:16:12.211 lat (msec) : 4=0.04% 00:16:12.211 cpu : usr=1.70%, sys=7.90%, ctx=2579, majf=0, minf=13 00:16:12.211 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.211 issued rwts: total=1042,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.211 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.211 job3: (groupid=0, jobs=1): err= 0: pid=87416: Mon Nov 18 15:02:35 2024 00:16:12.211 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:12.211 slat (nsec): min=12008, max=57441, avg=16069.14, stdev=4972.83 00:16:12.211 clat (usec): min=160, max=588, avg=229.47, stdev=32.62 00:16:12.211 lat (usec): min=173, max=605, avg=245.54, stdev=33.70 00:16:12.211 clat percentiles (usec): 00:16:12.211 | 1.00th=[ 174], 5.00th=[ 186], 10.00th=[ 194], 20.00th=[ 202], 00:16:12.211 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 233], 00:16:12.211 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 289], 00:16:12.211 | 99.00th=[ 322], 
99.50th=[ 334], 99.90th=[ 375], 99.95th=[ 379], 00:16:12.211 | 99.99th=[ 586] 00:16:12.211 write: IOPS=2299, BW=9199KiB/s (9420kB/s)(9208KiB/1001msec); 0 zone resets 00:16:12.211 slat (usec): min=18, max=103, avg=24.97, stdev= 7.42 00:16:12.211 clat (usec): min=126, max=2590, avg=187.60, stdev=59.40 00:16:12.211 lat (usec): min=146, max=2610, avg=212.57, stdev=59.83 00:16:12.211 clat percentiles (usec): 00:16:12.211 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 161], 00:16:12.211 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 184], 60.00th=[ 190], 00:16:12.211 | 70.00th=[ 198], 80.00th=[ 208], 90.00th=[ 223], 95.00th=[ 241], 00:16:12.211 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 570], 99.95th=[ 586], 00:16:12.211 | 99.99th=[ 2606] 00:16:12.211 bw ( KiB/s): min= 9096, max= 9096, per=28.69%, avg=9096.00, stdev= 0.00, samples=1 00:16:12.211 iops : min= 2274, max= 2274, avg=2274.00, stdev= 0.00, samples=1 00:16:12.211 lat (usec) : 250=87.06%, 500=12.85%, 750=0.07% 00:16:12.211 lat (msec) : 4=0.02% 00:16:12.211 cpu : usr=1.20%, sys=7.40%, ctx=4357, majf=0, minf=7 00:16:12.211 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.211 issued rwts: total=2048,2302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.211 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.211 00:16:12.211 Run status group 0 (all jobs): 00:16:12.211 READ: bw=24.8MiB/s (26.0MB/s), 4132KiB/s-8947KiB/s (4231kB/s-9162kB/s), io=24.9MiB (26.1MB), run=1001-1001msec 00:16:12.211 WRITE: bw=31.0MiB/s (32.5MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=31.0MiB (32.5MB), run=1001-1001msec 00:16:12.211 00:16:12.211 Disk stats (read/write): 00:16:12.211 nvme0n1: ios=1073/1155, merge=0/0, ticks=456/375, in_queue=831, util=88.77% 00:16:12.211 nvme0n2: ios=2080/2056, merge=0/0, ticks=443/379, in_queue=822, util=88.40% 00:16:12.211 nvme0n3: ios=1024/1162, merge=0/0, ticks=419/375, in_queue=794, util=89.20% 00:16:12.211 nvme0n4: ios=1692/2048, merge=0/0, ticks=393/412, in_queue=805, util=89.76% 00:16:12.211 15:02:35 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:12.211 [global] 00:16:12.211 thread=1 00:16:12.211 invalidate=1 00:16:12.211 rw=write 00:16:12.211 time_based=1 00:16:12.211 runtime=1 00:16:12.211 ioengine=libaio 00:16:12.211 direct=1 00:16:12.211 bs=4096 00:16:12.211 iodepth=128 00:16:12.211 norandommap=0 00:16:12.211 numjobs=1 00:16:12.211 00:16:12.211 verify_dump=1 00:16:12.211 verify_backlog=512 00:16:12.211 verify_state_save=0 00:16:12.211 do_verify=1 00:16:12.211 verify=crc32c-intel 00:16:12.211 [job0] 00:16:12.211 filename=/dev/nvme0n1 00:16:12.211 [job1] 00:16:12.211 filename=/dev/nvme0n2 00:16:12.211 [job2] 00:16:12.211 filename=/dev/nvme0n3 00:16:12.211 [job3] 00:16:12.211 filename=/dev/nvme0n4 00:16:12.211 Could not set queue depth (nvme0n1) 00:16:12.211 Could not set queue depth (nvme0n2) 00:16:12.211 Could not set queue depth (nvme0n3) 00:16:12.211 Could not set queue depth (nvme0n4) 00:16:12.211 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:12.211 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:12.211 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:16:12.211 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:12.211 fio-3.35 00:16:12.211 Starting 4 threads 00:16:13.588 00:16:13.588 job0: (groupid=0, jobs=1): err= 0: pid=87471: Mon Nov 18 15:02:36 2024 00:16:13.588 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:16:13.588 slat (usec): min=7, max=7285, avg=268.55, stdev=1066.07 00:16:13.588 clat (usec): min=18120, max=49865, avg=34437.39, stdev=7705.57 00:16:13.588 lat (usec): min=18506, max=49891, avg=34705.95, stdev=7690.88 00:16:13.588 clat percentiles (usec): 00:16:13.588 | 1.00th=[22152], 5.00th=[24511], 10.00th=[26084], 20.00th=[26608], 00:16:13.588 | 30.00th=[28705], 40.00th=[30802], 50.00th=[33424], 60.00th=[35390], 00:16:13.588 | 70.00th=[37487], 80.00th=[42206], 90.00th=[46400], 95.00th=[48497], 00:16:13.588 | 99.00th=[49021], 99.50th=[49546], 99.90th=[50070], 99.95th=[50070], 00:16:13.588 | 99.99th=[50070] 00:16:13.588 write: IOPS=2123, BW=8494KiB/s (8698kB/s)(8528KiB/1004msec); 0 zone resets 00:16:13.588 slat (usec): min=15, max=7177, avg=202.84, stdev=987.96 00:16:13.588 clat (usec): min=975, max=34074, avg=25940.92, stdev=4993.74 00:16:13.588 lat (usec): min=4841, max=34123, avg=26143.76, stdev=4927.37 00:16:13.588 clat percentiles (usec): 00:16:13.588 | 1.00th=[ 5473], 5.00th=[18744], 10.00th=[20317], 20.00th=[22676], 00:16:13.588 | 30.00th=[23725], 40.00th=[25297], 50.00th=[26608], 60.00th=[28443], 00:16:13.588 | 70.00th=[28705], 80.00th=[30278], 90.00th=[31851], 95.00th=[31851], 00:16:13.588 | 99.00th=[32900], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:16:13.588 | 99.99th=[33817] 00:16:13.588 bw ( KiB/s): min= 8015, max= 8368, per=19.93%, avg=8191.50, stdev=249.61, samples=2 00:16:13.588 iops : min= 2003, max= 2092, avg=2047.50, stdev=62.93, samples=2 00:16:13.588 lat (usec) : 1000=0.02% 00:16:13.588 lat (msec) : 10=0.77%, 20=3.18%, 50=96.03% 00:16:13.588 cpu : usr=2.29%, sys=6.68%, ctx=205, majf=0, minf=5 00:16:13.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:13.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:13.588 issued rwts: total=2048,2132,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.588 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:13.588 job1: (groupid=0, jobs=1): err= 0: pid=87472: Mon Nov 18 15:02:36 2024 00:16:13.588 read: IOPS=2384, BW=9538KiB/s (9767kB/s)(9576KiB/1004msec) 00:16:13.588 slat (usec): min=2, max=7493, avg=194.95, stdev=868.09 00:16:13.588 clat (usec): min=1254, max=31191, avg=24856.38, stdev=3466.40 00:16:13.588 lat (usec): min=4459, max=34259, avg=25051.33, stdev=3398.04 00:16:13.588 clat percentiles (usec): 00:16:13.588 | 1.00th=[ 6980], 5.00th=[20317], 10.00th=[21365], 20.00th=[22938], 00:16:13.588 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25560], 60.00th=[26084], 00:16:13.588 | 70.00th=[26608], 80.00th=[27395], 90.00th=[27919], 95.00th=[28443], 00:16:13.588 | 99.00th=[30278], 99.50th=[30802], 99.90th=[31065], 99.95th=[31065], 00:16:13.588 | 99.99th=[31065] 00:16:13.588 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:16:13.588 slat (usec): min=6, max=7660, avg=201.97, stdev=976.96 00:16:13.588 clat (usec): min=17824, max=32307, avg=25952.54, stdev=2957.46 00:16:13.589 lat (usec): min=17851, max=32330, avg=26154.51, stdev=2874.85 00:16:13.589 clat percentiles (usec): 00:16:13.589 | 
1.00th=[19530], 5.00th=[20579], 10.00th=[21365], 20.00th=[23200], 00:16:13.589 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26608], 00:16:13.589 | 70.00th=[27657], 80.00th=[28443], 90.00th=[29754], 95.00th=[30540], 00:16:13.589 | 99.00th=[32113], 99.50th=[32113], 99.90th=[32375], 99.95th=[32375], 00:16:13.589 | 99.99th=[32375] 00:16:13.589 bw ( KiB/s): min=10083, max=10396, per=24.92%, avg=10239.50, stdev=221.32, samples=2 00:16:13.589 iops : min= 2520, max= 2599, avg=2559.50, stdev=55.86, samples=2 00:16:13.589 lat (msec) : 2=0.02%, 10=0.65%, 20=2.38%, 50=96.95% 00:16:13.589 cpu : usr=2.49%, sys=6.48%, ctx=484, majf=0, minf=16 00:16:13.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:16:13.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:13.589 issued rwts: total=2394,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.589 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:13.589 job2: (groupid=0, jobs=1): err= 0: pid=87476: Mon Nov 18 15:02:36 2024 00:16:13.589 read: IOPS=2376, BW=9507KiB/s (9736kB/s)(9536KiB/1003msec) 00:16:13.589 slat (usec): min=2, max=7919, avg=197.29, stdev=909.93 00:16:13.589 clat (usec): min=1018, max=35439, avg=25423.94, stdev=3808.90 00:16:13.589 lat (usec): min=4333, max=35461, avg=25621.23, stdev=3733.86 00:16:13.589 clat percentiles (usec): 00:16:13.589 | 1.00th=[ 6259], 5.00th=[20579], 10.00th=[21890], 20.00th=[24249], 00:16:13.589 | 30.00th=[24773], 40.00th=[25560], 50.00th=[26084], 60.00th=[26608], 00:16:13.589 | 70.00th=[26870], 80.00th=[27657], 90.00th=[28705], 95.00th=[29492], 00:16:13.589 | 99.00th=[33424], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:16:13.589 | 99.99th=[35390] 00:16:13.589 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:16:13.589 slat (usec): min=9, max=7138, avg=200.18, stdev=956.87 00:16:13.589 clat (usec): min=17171, max=32411, avg=25517.01, stdev=2880.32 00:16:13.589 lat (usec): min=19179, max=32541, avg=25717.19, stdev=2810.78 00:16:13.589 clat percentiles (usec): 00:16:13.589 | 1.00th=[19530], 5.00th=[20317], 10.00th=[21365], 20.00th=[22676], 00:16:13.589 | 30.00th=[23725], 40.00th=[25560], 50.00th=[25822], 60.00th=[26346], 00:16:13.589 | 70.00th=[27132], 80.00th=[27919], 90.00th=[29230], 95.00th=[30278], 00:16:13.589 | 99.00th=[31851], 99.50th=[32375], 99.90th=[32375], 99.95th=[32375], 00:16:13.589 | 99.99th=[32375] 00:16:13.589 bw ( KiB/s): min=10027, max=10432, per=24.89%, avg=10229.50, stdev=286.38, samples=2 00:16:13.589 iops : min= 2506, max= 2608, avg=2557.00, stdev=72.12, samples=2 00:16:13.589 lat (msec) : 2=0.02%, 10=0.65%, 20=2.77%, 50=96.56% 00:16:13.589 cpu : usr=2.10%, sys=7.39%, ctx=419, majf=0, minf=9 00:16:13.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:16:13.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:13.589 issued rwts: total=2384,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.589 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:13.589 job3: (groupid=0, jobs=1): err= 0: pid=87478: Mon Nov 18 15:02:36 2024 00:16:13.589 read: IOPS=2657, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1005msec) 00:16:13.589 slat (usec): min=6, max=10390, avg=179.50, stdev=900.41 00:16:13.589 clat (usec): min=3813, max=40316, avg=22837.97, stdev=3926.67 
00:16:13.589 lat (usec): min=3828, max=40356, avg=23017.47, stdev=3980.50 00:16:13.589 clat percentiles (usec): 00:16:13.589 | 1.00th=[ 9241], 5.00th=[17171], 10.00th=[18220], 20.00th=[20317], 00:16:13.589 | 30.00th=[21103], 40.00th=[21627], 50.00th=[22414], 60.00th=[23462], 00:16:13.589 | 70.00th=[24773], 80.00th=[25822], 90.00th=[28443], 95.00th=[29754], 00:16:13.589 | 99.00th=[30802], 99.50th=[31327], 99.90th=[35914], 99.95th=[36963], 00:16:13.589 | 99.99th=[40109] 00:16:13.589 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:16:13.589 slat (usec): min=14, max=7249, avg=160.96, stdev=768.52 00:16:13.589 clat (usec): min=13144, max=36555, avg=21367.99, stdev=3656.05 00:16:13.589 lat (usec): min=13170, max=36591, avg=21528.95, stdev=3730.43 00:16:13.589 clat percentiles (usec): 00:16:13.589 | 1.00th=[15270], 5.00th=[16909], 10.00th=[17171], 20.00th=[17957], 00:16:13.589 | 30.00th=[18482], 40.00th=[19530], 50.00th=[20841], 60.00th=[21890], 00:16:13.589 | 70.00th=[23725], 80.00th=[24773], 90.00th=[26608], 95.00th=[27132], 00:16:13.589 | 99.00th=[29754], 99.50th=[30016], 99.90th=[31851], 99.95th=[36439], 00:16:13.589 | 99.99th=[36439] 00:16:13.589 bw ( KiB/s): min=12135, max=12288, per=29.72%, avg=12211.50, stdev=108.19, samples=2 00:16:13.589 iops : min= 3033, max= 3072, avg=3052.50, stdev=27.58, samples=2 00:16:13.589 lat (msec) : 4=0.09%, 10=0.38%, 20=30.35%, 50=69.18% 00:16:13.589 cpu : usr=3.29%, sys=8.86%, ctx=253, majf=0, minf=9 00:16:13.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:13.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:13.589 issued rwts: total=2671,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.589 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:13.589 00:16:13.589 Run status group 0 (all jobs): 00:16:13.589 READ: bw=36.9MiB/s (38.7MB/s), 8159KiB/s-10.4MiB/s (8355kB/s-10.9MB/s), io=37.1MiB (38.9MB), run=1003-1005msec 00:16:13.589 WRITE: bw=40.1MiB/s (42.1MB/s), 8494KiB/s-11.9MiB/s (8698kB/s-12.5MB/s), io=40.3MiB (42.3MB), run=1003-1005msec 00:16:13.589 00:16:13.589 Disk stats (read/write): 00:16:13.589 nvme0n1: ios=1682/2048, merge=0/0, ticks=13564/12167, in_queue=25731, util=88.68% 00:16:13.589 nvme0n2: ios=2097/2232, merge=0/0, ticks=11996/12149, in_queue=24145, util=89.87% 00:16:13.589 nvme0n3: ios=2048/2200, merge=0/0, ticks=12313/12055, in_queue=24368, util=89.06% 00:16:13.589 nvme0n4: ios=2290/2560, merge=0/0, ticks=17173/16350, in_queue=33523, util=89.62% 00:16:13.589 15:02:36 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:13.589 [global] 00:16:13.589 thread=1 00:16:13.589 invalidate=1 00:16:13.589 rw=randwrite 00:16:13.589 time_based=1 00:16:13.589 runtime=1 00:16:13.589 ioengine=libaio 00:16:13.589 direct=1 00:16:13.589 bs=4096 00:16:13.589 iodepth=128 00:16:13.589 norandommap=0 00:16:13.589 numjobs=1 00:16:13.589 00:16:13.589 verify_dump=1 00:16:13.589 verify_backlog=512 00:16:13.589 verify_state_save=0 00:16:13.589 do_verify=1 00:16:13.589 verify=crc32c-intel 00:16:13.589 [job0] 00:16:13.589 filename=/dev/nvme0n1 00:16:13.589 [job1] 00:16:13.589 filename=/dev/nvme0n2 00:16:13.589 [job2] 00:16:13.589 filename=/dev/nvme0n3 00:16:13.589 [job3] 00:16:13.589 filename=/dev/nvme0n4 00:16:13.589 Could not set queue depth (nvme0n1) 00:16:13.589 Could not set queue depth (nvme0n2) 
00:16:13.589 Could not set queue depth (nvme0n3) 00:16:13.589 Could not set queue depth (nvme0n4) 00:16:13.589 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:13.589 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:13.589 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:13.589 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:13.589 fio-3.35 00:16:13.589 Starting 4 threads 00:16:14.965 00:16:14.965 job0: (groupid=0, jobs=1): err= 0: pid=87539: Mon Nov 18 15:02:38 2024 00:16:14.965 read: IOPS=4137, BW=16.2MiB/s (16.9MB/s)(16.2MiB/1003msec) 00:16:14.965 slat (usec): min=3, max=3944, avg=106.97, stdev=493.03 00:16:14.965 clat (usec): min=467, max=17674, avg=14166.05, stdev=1529.78 00:16:14.965 lat (usec): min=3566, max=17718, avg=14273.02, stdev=1461.78 00:16:14.965 clat percentiles (usec): 00:16:14.965 | 1.00th=[ 7242], 5.00th=[11731], 10.00th=[12387], 20.00th=[13566], 00:16:14.965 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14615], 00:16:14.965 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15533], 95.00th=[15795], 00:16:14.965 | 99.00th=[16712], 99.50th=[17433], 99.90th=[17695], 99.95th=[17695], 00:16:14.965 | 99.99th=[17695] 00:16:14.965 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:16:14.965 slat (usec): min=13, max=4313, avg=113.25, stdev=488.90 00:16:14.965 clat (usec): min=7431, max=20297, avg=14707.78, stdev=1759.91 00:16:14.965 lat (usec): min=7451, max=20321, avg=14821.03, stdev=1747.65 00:16:14.965 clat percentiles (usec): 00:16:14.965 | 1.00th=[11469], 5.00th=[11994], 10.00th=[12256], 20.00th=[12911], 00:16:14.965 | 30.00th=[13960], 40.00th=[14484], 50.00th=[15008], 60.00th=[15401], 00:16:14.965 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16712], 95.00th=[17433], 00:16:14.965 | 99.00th=[19268], 99.50th=[19530], 99.90th=[20317], 99.95th=[20317], 00:16:14.965 | 99.99th=[20317] 00:16:14.965 bw ( KiB/s): min=18112, max=18160, per=35.75%, avg=18136.00, stdev=33.94, samples=2 00:16:14.965 iops : min= 4528, max= 4540, avg=4534.00, stdev= 8.49, samples=2 00:16:14.965 lat (usec) : 500=0.01% 00:16:14.965 lat (msec) : 4=0.30%, 10=0.48%, 20=98.98%, 50=0.23% 00:16:14.965 cpu : usr=4.39%, sys=12.38%, ctx=599, majf=0, minf=5 00:16:14.965 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:14.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:14.965 issued rwts: total=4150,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.965 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:14.965 job1: (groupid=0, jobs=1): err= 0: pid=87540: Mon Nov 18 15:02:38 2024 00:16:14.965 read: IOPS=2008, BW=8036KiB/s (8229kB/s)(8084KiB/1006msec) 00:16:14.965 slat (usec): min=6, max=14106, avg=268.24, stdev=1343.21 00:16:14.965 clat (usec): min=4789, max=51300, avg=32294.45, stdev=6059.48 00:16:14.965 lat (usec): min=4804, max=51337, avg=32562.70, stdev=6171.61 00:16:14.965 clat percentiles (usec): 00:16:14.965 | 1.00th=[17433], 5.00th=[24773], 10.00th=[25297], 20.00th=[27919], 00:16:14.966 | 30.00th=[29230], 40.00th=[30278], 50.00th=[31327], 60.00th=[32637], 00:16:14.966 | 70.00th=[34866], 80.00th=[39060], 90.00th=[41157], 95.00th=[42730], 00:16:14.966 
| 99.00th=[44303], 99.50th=[46400], 99.90th=[50070], 99.95th=[50594], 00:16:14.966 | 99.99th=[51119] 00:16:14.966 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:16:14.966 slat (usec): min=14, max=9635, avg=214.12, stdev=1039.94 00:16:14.966 clat (usec): min=16118, max=56760, avg=29796.37, stdev=7272.09 00:16:14.966 lat (usec): min=16146, max=56822, avg=30010.50, stdev=7329.16 00:16:14.966 clat percentiles (usec): 00:16:14.966 | 1.00th=[19006], 5.00th=[21627], 10.00th=[21890], 20.00th=[22938], 00:16:14.966 | 30.00th=[24773], 40.00th=[26084], 50.00th=[29230], 60.00th=[30016], 00:16:14.966 | 70.00th=[32113], 80.00th=[35914], 90.00th=[38536], 95.00th=[43779], 00:16:14.966 | 99.00th=[53740], 99.50th=[54789], 99.90th=[56886], 99.95th=[56886], 00:16:14.966 | 99.99th=[56886] 00:16:14.966 bw ( KiB/s): min= 8192, max= 8208, per=16.16%, avg=8200.00, stdev=11.31, samples=2 00:16:14.966 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:16:14.966 lat (msec) : 10=0.07%, 20=2.26%, 50=96.73%, 100=0.93% 00:16:14.966 cpu : usr=2.19%, sys=6.77%, ctx=192, majf=0, minf=15 00:16:14.966 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:14.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:14.966 issued rwts: total=2021,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:14.966 job2: (groupid=0, jobs=1): err= 0: pid=87541: Mon Nov 18 15:02:38 2024 00:16:14.966 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:16:14.966 slat (usec): min=7, max=4308, avg=125.18, stdev=579.67 00:16:14.966 clat (usec): min=11413, max=21071, avg=16473.33, stdev=1640.81 00:16:14.966 lat (usec): min=12500, max=21084, avg=16598.51, stdev=1557.96 00:16:14.966 clat percentiles (usec): 00:16:14.966 | 1.00th=[12518], 5.00th=[13829], 10.00th=[14615], 20.00th=[15139], 00:16:14.966 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16319], 60.00th=[16712], 00:16:14.966 | 70.00th=[17433], 80.00th=[17957], 90.00th=[18744], 95.00th=[19268], 00:16:14.966 | 99.00th=[20055], 99.50th=[20841], 99.90th=[21103], 99.95th=[21103], 00:16:14.966 | 99.99th=[21103] 00:16:14.966 write: IOPS=4073, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:16:14.966 slat (usec): min=11, max=4607, avg=127.58, stdev=531.08 00:16:14.966 clat (usec): min=260, max=21471, avg=16517.08, stdev=2264.02 00:16:14.966 lat (usec): min=3792, max=21525, avg=16644.66, stdev=2245.43 00:16:14.966 clat percentiles (usec): 00:16:14.966 | 1.00th=[ 9110], 5.00th=[13435], 10.00th=[13960], 20.00th=[15139], 00:16:14.966 | 30.00th=[15926], 40.00th=[16319], 50.00th=[16581], 60.00th=[16909], 00:16:14.966 | 70.00th=[17171], 80.00th=[18220], 90.00th=[19530], 95.00th=[20055], 00:16:14.966 | 99.00th=[20579], 99.50th=[21103], 99.90th=[21365], 99.95th=[21365], 00:16:14.966 | 99.99th=[21365] 00:16:14.966 bw ( KiB/s): min=15280, max=16416, per=31.24%, avg=15848.00, stdev=803.27, samples=2 00:16:14.966 iops : min= 3820, max= 4104, avg=3962.00, stdev=200.82, samples=2 00:16:14.966 lat (usec) : 500=0.01% 00:16:14.966 lat (msec) : 4=0.08%, 10=0.80%, 20=95.45%, 50=3.66% 00:16:14.966 cpu : usr=4.09%, sys=11.58%, ctx=559, majf=0, minf=13 00:16:14.966 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:14.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.966 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:14.966 issued rwts: total=3584,4086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:14.966 job3: (groupid=0, jobs=1): err= 0: pid=87542: Mon Nov 18 15:02:38 2024 00:16:14.966 read: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec) 00:16:14.966 slat (usec): min=6, max=15914, avg=318.09, stdev=1401.19 00:16:14.966 clat (usec): min=19801, max=61688, avg=40349.93, stdev=7943.60 00:16:14.966 lat (usec): min=24208, max=61715, avg=40668.02, stdev=7885.17 00:16:14.966 clat percentiles (usec): 00:16:14.966 | 1.00th=[24249], 5.00th=[28181], 10.00th=[31327], 20.00th=[34866], 00:16:14.966 | 30.00th=[36439], 40.00th=[36963], 50.00th=[38536], 60.00th=[41157], 00:16:14.966 | 70.00th=[43254], 80.00th=[45351], 90.00th=[52167], 95.00th=[58983], 00:16:14.966 | 99.00th=[61604], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604], 00:16:14.966 | 99.99th=[61604] 00:16:14.966 write: IOPS=2006, BW=8028KiB/s (8221kB/s)(8068KiB/1005msec); 0 zone resets 00:16:14.966 slat (usec): min=17, max=9808, avg=247.09, stdev=1240.73 00:16:14.966 clat (usec): min=1462, max=44574, avg=31469.45, stdev=6905.77 00:16:14.966 lat (usec): min=5388, max=44601, avg=31716.54, stdev=6847.12 00:16:14.966 clat percentiles (usec): 00:16:14.966 | 1.00th=[ 5932], 5.00th=[20317], 10.00th=[24773], 20.00th=[27395], 00:16:14.966 | 30.00th=[28705], 40.00th=[29230], 50.00th=[31327], 60.00th=[32375], 00:16:14.966 | 70.00th=[34341], 80.00th=[39060], 90.00th=[40109], 95.00th=[41157], 00:16:14.966 | 99.00th=[41681], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:16:14.966 | 99.99th=[44827] 00:16:14.966 bw ( KiB/s): min= 7048, max= 8080, per=14.91%, avg=7564.00, stdev=729.73, samples=2 00:16:14.966 iops : min= 1762, max= 2020, avg=1891.00, stdev=182.43, samples=2 00:16:14.966 lat (msec) : 2=0.03%, 10=0.90%, 20=1.91%, 50=92.29%, 100=4.87% 00:16:14.966 cpu : usr=1.49%, sys=6.37%, ctx=148, majf=0, minf=19 00:16:14.966 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:16:14.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:14.966 issued rwts: total=1536,2017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:14.966 00:16:14.966 Run status group 0 (all jobs): 00:16:14.966 READ: bw=43.8MiB/s (46.0MB/s), 6113KiB/s-16.2MiB/s (6260kB/s-16.9MB/s), io=44.1MiB (46.2MB), run=1003-1006msec 00:16:14.966 WRITE: bw=49.5MiB/s (51.9MB/s), 8028KiB/s-17.9MiB/s (8221kB/s-18.8MB/s), io=49.8MiB (52.3MB), run=1003-1006msec 00:16:14.966 00:16:14.966 Disk stats (read/write): 00:16:14.966 nvme0n1: ios=3634/3843, merge=0/0, ticks=12260/12189, in_queue=24449, util=87.88% 00:16:14.966 nvme0n2: ios=1585/1884, merge=0/0, ticks=16697/16790, in_queue=33487, util=88.27% 00:16:14.966 nvme0n3: ios=3072/3543, merge=0/0, ticks=11792/13004, in_queue=24796, util=88.95% 00:16:14.966 nvme0n4: ios=1312/1536, merge=0/0, ticks=13611/11590, in_queue=25201, util=89.50% 00:16:14.966 15:02:38 -- target/fio.sh@55 -- # sync 00:16:14.966 15:02:38 -- target/fio.sh@59 -- # fio_pid=87556 00:16:14.966 15:02:38 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:14.966 15:02:38 -- target/fio.sh@61 -- # sleep 3 00:16:14.966 [global] 00:16:14.966 thread=1 00:16:14.966 invalidate=1 00:16:14.966 rw=read 00:16:14.966 
time_based=1 00:16:14.966 runtime=10 00:16:14.966 ioengine=libaio 00:16:14.966 direct=1 00:16:14.966 bs=4096 00:16:14.966 iodepth=1 00:16:14.966 norandommap=1 00:16:14.966 numjobs=1 00:16:14.966 00:16:14.966 [job0] 00:16:14.966 filename=/dev/nvme0n1 00:16:14.966 [job1] 00:16:14.966 filename=/dev/nvme0n2 00:16:14.966 [job2] 00:16:14.966 filename=/dev/nvme0n3 00:16:14.966 [job3] 00:16:14.966 filename=/dev/nvme0n4 00:16:14.966 Could not set queue depth (nvme0n1) 00:16:14.966 Could not set queue depth (nvme0n2) 00:16:14.966 Could not set queue depth (nvme0n3) 00:16:14.966 Could not set queue depth (nvme0n4) 00:16:15.225 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:15.225 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:15.225 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:15.225 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:15.225 fio-3.35 00:16:15.225 Starting 4 threads 00:16:18.510 15:02:41 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:18.510 fio: pid=87599, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:18.510 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=37588992, buflen=4096 00:16:18.510 15:02:41 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:18.510 fio: pid=87598, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:18.510 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=38002688, buflen=4096 00:16:18.510 15:02:41 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:18.510 15:02:41 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:18.768 fio: pid=87596, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:18.768 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=50810880, buflen=4096 00:16:18.768 15:02:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:18.768 15:02:42 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:19.026 fio: pid=87597, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:19.026 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=51355648, buflen=4096 00:16:19.026 00:16:19.026 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87596: Mon Nov 18 15:02:42 2024 00:16:19.026 read: IOPS=3630, BW=14.2MiB/s (14.9MB/s)(48.5MiB/3417msec) 00:16:19.026 slat (usec): min=11, max=19784, avg=20.64, stdev=210.35 00:16:19.026 clat (usec): min=116, max=3221, avg=253.33, stdev=67.34 00:16:19.026 lat (usec): min=130, max=20783, avg=273.96, stdev=226.19 00:16:19.026 clat percentiles (usec): 00:16:19.026 | 1.00th=[ 149], 5.00th=[ 174], 10.00th=[ 190], 20.00th=[ 208], 00:16:19.026 | 30.00th=[ 227], 40.00th=[ 245], 50.00th=[ 258], 60.00th=[ 269], 00:16:19.026 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 318], 00:16:19.026 | 99.00th=[ 351], 99.50th=[ 367], 99.90th=[ 652], 99.95th=[ 1221], 00:16:19.026 | 99.99th=[ 2900] 00:16:19.026 bw ( KiB/s): min=13168, max=17248, 
per=29.95%, avg=14172.00, stdev=1525.78, samples=6 00:16:19.026 iops : min= 3292, max= 4312, avg=3543.00, stdev=381.45, samples=6 00:16:19.026 lat (usec) : 250=43.33%, 500=56.48%, 750=0.10%, 1000=0.02% 00:16:19.026 lat (msec) : 2=0.03%, 4=0.03% 00:16:19.026 cpu : usr=0.88%, sys=5.18%, ctx=12418, majf=0, minf=1 00:16:19.026 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:19.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.026 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.026 issued rwts: total=12406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.026 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:19.026 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87597: Mon Nov 18 15:02:42 2024 00:16:19.026 read: IOPS=3417, BW=13.3MiB/s (14.0MB/s)(49.0MiB/3669msec) 00:16:19.026 slat (usec): min=10, max=16449, avg=23.11, stdev=251.24 00:16:19.026 clat (usec): min=3, max=3048, avg=267.75, stdev=82.74 00:16:19.026 lat (usec): min=133, max=16709, avg=290.86, stdev=265.36 00:16:19.026 clat percentiles (usec): 00:16:19.026 | 1.00th=[ 141], 5.00th=[ 159], 10.00th=[ 174], 20.00th=[ 210], 00:16:19.026 | 30.00th=[ 245], 40.00th=[ 260], 50.00th=[ 273], 60.00th=[ 285], 00:16:19.026 | 70.00th=[ 293], 80.00th=[ 310], 90.00th=[ 334], 95.00th=[ 363], 00:16:19.026 | 99.00th=[ 453], 99.50th=[ 482], 99.90th=[ 988], 99.95th=[ 1811], 00:16:19.026 | 99.99th=[ 2474] 00:16:19.026 bw ( KiB/s): min=10344, max=15921, per=28.63%, avg=13546.43, stdev=1643.05, samples=7 00:16:19.026 iops : min= 2586, max= 3980, avg=3386.57, stdev=410.70, samples=7 00:16:19.026 lat (usec) : 4=0.02%, 250=32.74%, 500=66.91%, 750=0.21%, 1000=0.02% 00:16:19.026 lat (msec) : 2=0.07%, 4=0.02% 00:16:19.026 cpu : usr=1.12%, sys=4.77%, ctx=12554, majf=0, minf=1 00:16:19.026 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:19.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.026 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.026 issued rwts: total=12539,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.026 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:19.026 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87598: Mon Nov 18 15:02:42 2024 00:16:19.026 read: IOPS=2899, BW=11.3MiB/s (11.9MB/s)(36.2MiB/3200msec) 00:16:19.026 slat (usec): min=9, max=7437, avg=17.64, stdev=106.19 00:16:19.026 clat (usec): min=3, max=4009, avg=325.42, stdev=72.16 00:16:19.026 lat (usec): min=139, max=7666, avg=343.06, stdev=129.56 00:16:19.026 clat percentiles (usec): 00:16:19.026 | 1.00th=[ 206], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 293], 00:16:19.026 | 30.00th=[ 302], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 326], 00:16:19.026 | 70.00th=[ 338], 80.00th=[ 355], 90.00th=[ 383], 95.00th=[ 408], 00:16:19.026 | 99.00th=[ 461], 99.50th=[ 486], 99.90th=[ 717], 99.95th=[ 1663], 00:16:19.026 | 99.99th=[ 4015] 00:16:19.027 bw ( KiB/s): min=10352, max=12032, per=24.56%, avg=11620.00, stdev=627.39, samples=6 00:16:19.027 iops : min= 2588, max= 3008, avg=2905.00, stdev=156.85, samples=6 00:16:19.027 lat (usec) : 4=0.01%, 250=2.46%, 500=97.15%, 750=0.27%, 1000=0.01% 00:16:19.027 lat (msec) : 2=0.06%, 4=0.01%, 10=0.01% 00:16:19.027 cpu : usr=0.88%, sys=3.97%, ctx=9296, majf=0, minf=2 00:16:19.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:16:19.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.027 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.027 issued rwts: total=9279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.027 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:19.027 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87599: Mon Nov 18 15:02:42 2024 00:16:19.027 read: IOPS=3114, BW=12.2MiB/s (12.8MB/s)(35.8MiB/2947msec) 00:16:19.027 slat (nsec): min=7916, max=73825, avg=15179.88, stdev=4678.83 00:16:19.027 clat (usec): min=161, max=7856, avg=304.40, stdev=105.85 00:16:19.027 lat (usec): min=175, max=7871, avg=319.58, stdev=105.33 00:16:19.027 clat percentiles (usec): 00:16:19.027 | 1.00th=[ 192], 5.00th=[ 210], 10.00th=[ 225], 20.00th=[ 265], 00:16:19.027 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 314], 00:16:19.027 | 70.00th=[ 326], 80.00th=[ 343], 90.00th=[ 371], 95.00th=[ 396], 00:16:19.027 | 99.00th=[ 437], 99.50th=[ 457], 99.90th=[ 783], 99.95th=[ 1483], 00:16:19.027 | 99.99th=[ 7832] 00:16:19.027 bw ( KiB/s): min=11792, max=15608, per=26.60%, avg=12587.20, stdev=1689.23, samples=5 00:16:19.027 iops : min= 2948, max= 3902, avg=3146.80, stdev=422.31, samples=5 00:16:19.027 lat (usec) : 250=17.73%, 500=82.04%, 750=0.10%, 1000=0.05% 00:16:19.027 lat (msec) : 2=0.04%, 4=0.01%, 10=0.01% 00:16:19.027 cpu : usr=0.78%, sys=4.07%, ctx=9179, majf=0, minf=2 00:16:19.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:19.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.027 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.027 issued rwts: total=9178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.027 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:19.027 00:16:19.027 Run status group 0 (all jobs): 00:16:19.027 READ: bw=46.2MiB/s (48.4MB/s), 11.3MiB/s-14.2MiB/s (11.9MB/s-14.9MB/s), io=170MiB (178MB), run=2947-3669msec 00:16:19.027 00:16:19.027 Disk stats (read/write): 00:16:19.027 nvme0n1: ios=12198/0, merge=0/0, ticks=3165/0, in_queue=3165, util=95.28% 00:16:19.027 nvme0n2: ios=12299/0, merge=0/0, ticks=3371/0, in_queue=3371, util=95.10% 00:16:19.027 nvme0n3: ios=9037/0, merge=0/0, ticks=2958/0, in_queue=2958, util=96.49% 00:16:19.027 nvme0n4: ios=8942/0, merge=0/0, ticks=2732/0, in_queue=2732, util=96.59% 00:16:19.027 15:02:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:19.027 15:02:42 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:19.285 15:02:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:19.285 15:02:42 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:19.543 15:02:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:19.543 15:02:42 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:19.801 15:02:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:19.801 15:02:43 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:20.059 15:02:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:16:20.059 15:02:43 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:20.317 15:02:43 -- target/fio.sh@69 -- # fio_status=0 00:16:20.317 15:02:43 -- target/fio.sh@70 -- # wait 87556 00:16:20.317 15:02:43 -- target/fio.sh@70 -- # fio_status=4 00:16:20.317 15:02:43 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:20.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.317 15:02:43 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:20.317 15:02:43 -- common/autotest_common.sh@1208 -- # local i=0 00:16:20.317 15:02:43 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:20.317 15:02:43 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:20.317 15:02:43 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:20.317 15:02:43 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:20.317 nvmf hotplug test: fio failed as expected 00:16:20.317 15:02:43 -- common/autotest_common.sh@1220 -- # return 0 00:16:20.317 15:02:43 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:20.317 15:02:43 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:20.317 15:02:43 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:20.575 15:02:43 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:20.575 15:02:43 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:20.575 15:02:43 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:20.575 15:02:43 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:20.575 15:02:43 -- target/fio.sh@91 -- # nvmftestfini 00:16:20.575 15:02:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:20.575 15:02:43 -- nvmf/common.sh@116 -- # sync 00:16:20.575 15:02:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:20.575 15:02:43 -- nvmf/common.sh@119 -- # set +e 00:16:20.575 15:02:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:20.575 15:02:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:20.575 rmmod nvme_tcp 00:16:20.575 rmmod nvme_fabrics 00:16:20.575 rmmod nvme_keyring 00:16:20.575 15:02:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:20.575 15:02:44 -- nvmf/common.sh@123 -- # set -e 00:16:20.575 15:02:44 -- nvmf/common.sh@124 -- # return 0 00:16:20.575 15:02:44 -- nvmf/common.sh@477 -- # '[' -n 87063 ']' 00:16:20.575 15:02:44 -- nvmf/common.sh@478 -- # killprocess 87063 00:16:20.575 15:02:44 -- common/autotest_common.sh@936 -- # '[' -z 87063 ']' 00:16:20.575 15:02:44 -- common/autotest_common.sh@940 -- # kill -0 87063 00:16:20.575 15:02:44 -- common/autotest_common.sh@941 -- # uname 00:16:20.575 15:02:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:20.575 15:02:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87063 00:16:20.575 killing process with pid 87063 00:16:20.575 15:02:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:20.575 15:02:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:20.575 15:02:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87063' 00:16:20.575 15:02:44 -- common/autotest_common.sh@955 -- # kill 87063 00:16:20.575 15:02:44 -- common/autotest_common.sh@960 -- # wait 87063 00:16:20.833 15:02:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:20.833 15:02:44 -- 
nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:20.833 15:02:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:20.833 15:02:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:20.833 15:02:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:20.833 15:02:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.833 15:02:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.833 15:02:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.833 15:02:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:20.833 00:16:20.833 real 0m19.390s 00:16:20.833 user 1m14.196s 00:16:20.833 sys 0m7.981s 00:16:20.833 15:02:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:20.833 15:02:44 -- common/autotest_common.sh@10 -- # set +x 00:16:20.833 ************************************ 00:16:20.833 END TEST nvmf_fio_target 00:16:20.833 ************************************ 00:16:20.833 15:02:44 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:20.833 15:02:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:20.833 15:02:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:20.833 15:02:44 -- common/autotest_common.sh@10 -- # set +x 00:16:20.833 ************************************ 00:16:20.833 START TEST nvmf_bdevio 00:16:20.833 ************************************ 00:16:20.833 15:02:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:21.092 * Looking for test storage... 00:16:21.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:21.092 15:02:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:21.092 15:02:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:21.092 15:02:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:21.092 15:02:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:21.092 15:02:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:21.092 15:02:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:21.092 15:02:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:21.092 15:02:44 -- scripts/common.sh@335 -- # IFS=.-: 00:16:21.092 15:02:44 -- scripts/common.sh@335 -- # read -ra ver1 00:16:21.092 15:02:44 -- scripts/common.sh@336 -- # IFS=.-: 00:16:21.092 15:02:44 -- scripts/common.sh@336 -- # read -ra ver2 00:16:21.092 15:02:44 -- scripts/common.sh@337 -- # local 'op=<' 00:16:21.092 15:02:44 -- scripts/common.sh@339 -- # ver1_l=2 00:16:21.092 15:02:44 -- scripts/common.sh@340 -- # ver2_l=1 00:16:21.092 15:02:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:21.092 15:02:44 -- scripts/common.sh@343 -- # case "$op" in 00:16:21.092 15:02:44 -- scripts/common.sh@344 -- # : 1 00:16:21.092 15:02:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:21.092 15:02:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:21.092 15:02:44 -- scripts/common.sh@364 -- # decimal 1 00:16:21.092 15:02:44 -- scripts/common.sh@352 -- # local d=1 00:16:21.092 15:02:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:21.092 15:02:44 -- scripts/common.sh@354 -- # echo 1 00:16:21.092 15:02:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:21.092 15:02:44 -- scripts/common.sh@365 -- # decimal 2 00:16:21.092 15:02:44 -- scripts/common.sh@352 -- # local d=2 00:16:21.092 15:02:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:21.092 15:02:44 -- scripts/common.sh@354 -- # echo 2 00:16:21.092 15:02:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:21.092 15:02:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:21.092 15:02:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:21.092 15:02:44 -- scripts/common.sh@367 -- # return 0 00:16:21.092 15:02:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:21.092 15:02:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:21.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.092 --rc genhtml_branch_coverage=1 00:16:21.092 --rc genhtml_function_coverage=1 00:16:21.092 --rc genhtml_legend=1 00:16:21.092 --rc geninfo_all_blocks=1 00:16:21.092 --rc geninfo_unexecuted_blocks=1 00:16:21.092 00:16:21.092 ' 00:16:21.092 15:02:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:21.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.092 --rc genhtml_branch_coverage=1 00:16:21.092 --rc genhtml_function_coverage=1 00:16:21.092 --rc genhtml_legend=1 00:16:21.092 --rc geninfo_all_blocks=1 00:16:21.092 --rc geninfo_unexecuted_blocks=1 00:16:21.092 00:16:21.092 ' 00:16:21.092 15:02:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:21.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.092 --rc genhtml_branch_coverage=1 00:16:21.092 --rc genhtml_function_coverage=1 00:16:21.092 --rc genhtml_legend=1 00:16:21.092 --rc geninfo_all_blocks=1 00:16:21.092 --rc geninfo_unexecuted_blocks=1 00:16:21.092 00:16:21.092 ' 00:16:21.092 15:02:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:21.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.092 --rc genhtml_branch_coverage=1 00:16:21.092 --rc genhtml_function_coverage=1 00:16:21.092 --rc genhtml_legend=1 00:16:21.092 --rc geninfo_all_blocks=1 00:16:21.092 --rc geninfo_unexecuted_blocks=1 00:16:21.092 00:16:21.092 ' 00:16:21.092 15:02:44 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:21.092 15:02:44 -- nvmf/common.sh@7 -- # uname -s 00:16:21.092 15:02:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.092 15:02:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.092 15:02:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.092 15:02:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.092 15:02:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.092 15:02:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.092 15:02:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.092 15:02:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.092 15:02:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.092 15:02:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.092 15:02:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:16:21.092 
15:02:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:16:21.092 15:02:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.092 15:02:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.092 15:02:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:21.092 15:02:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:21.092 15:02:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.092 15:02:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.092 15:02:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.093 15:02:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.093 15:02:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.093 15:02:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.093 15:02:44 -- paths/export.sh@5 -- # export PATH 00:16:21.093 15:02:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.093 15:02:44 -- nvmf/common.sh@46 -- # : 0 00:16:21.093 15:02:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:21.093 15:02:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:21.093 15:02:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:21.093 15:02:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.093 15:02:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.093 15:02:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
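For context on the NVME_HOSTNQN/NVME_HOSTID values generated above: nvmf/common.sh derives them once with nvme-cli and later passes them to every connect. A minimal standalone sketch of the same pattern follows; the target address and subsystem NQN in it are illustrative placeholders, not values taken from this run.

    # Sketch only: generate a host NQN the way nvmf/common.sh does, then reuse
    # it for a manual NVMe-oF/TCP connect. Address and NQN below are placeholders.
    HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    HOSTID=${HOSTNQN##*:}                # bare UUID portion of the NQN
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$HOSTNQN" --hostid="$HOSTID"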
00:16:21.093 15:02:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:21.093 15:02:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:21.093 15:02:44 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:21.093 15:02:44 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:21.093 15:02:44 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:21.093 15:02:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:21.093 15:02:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.093 15:02:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:21.093 15:02:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:21.093 15:02:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:21.093 15:02:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.093 15:02:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.093 15:02:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.093 15:02:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:21.093 15:02:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:21.093 15:02:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:21.093 15:02:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:21.093 15:02:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:21.093 15:02:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:21.093 15:02:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:21.093 15:02:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:21.093 15:02:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:21.093 15:02:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:21.093 15:02:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:21.093 15:02:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:21.093 15:02:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:21.093 15:02:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:21.093 15:02:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:21.093 15:02:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:21.093 15:02:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:21.093 15:02:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:21.093 15:02:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:21.093 15:02:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:21.093 Cannot find device "nvmf_tgt_br" 00:16:21.093 15:02:44 -- nvmf/common.sh@154 -- # true 00:16:21.093 15:02:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:21.093 Cannot find device "nvmf_tgt_br2" 00:16:21.093 15:02:44 -- nvmf/common.sh@155 -- # true 00:16:21.093 15:02:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:21.093 15:02:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:21.093 Cannot find device "nvmf_tgt_br" 00:16:21.093 15:02:44 -- nvmf/common.sh@157 -- # true 00:16:21.093 15:02:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:21.351 Cannot find device "nvmf_tgt_br2" 00:16:21.351 15:02:44 -- nvmf/common.sh@158 -- # true 00:16:21.351 15:02:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:21.351 15:02:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:21.351 15:02:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:21.351 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:16:21.351 15:02:44 -- nvmf/common.sh@161 -- # true 00:16:21.351 15:02:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:21.351 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:21.351 15:02:44 -- nvmf/common.sh@162 -- # true 00:16:21.351 15:02:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:21.352 15:02:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:21.352 15:02:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:21.352 15:02:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:21.352 15:02:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:21.352 15:02:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:21.352 15:02:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:21.352 15:02:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:21.352 15:02:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:21.352 15:02:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:21.352 15:02:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:21.352 15:02:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:21.352 15:02:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:21.352 15:02:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:21.352 15:02:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:21.352 15:02:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:21.352 15:02:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:21.352 15:02:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:21.352 15:02:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:21.352 15:02:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:21.352 15:02:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:21.610 15:02:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:21.610 15:02:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:21.610 15:02:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:21.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:21.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:16:21.610 00:16:21.610 --- 10.0.0.2 ping statistics --- 00:16:21.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.610 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:21.610 15:02:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:21.610 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:21.610 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:16:21.610 00:16:21.610 --- 10.0.0.3 ping statistics --- 00:16:21.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.610 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:21.610 15:02:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:21.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:21.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:21.610 00:16:21.610 --- 10.0.0.1 ping statistics --- 00:16:21.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.610 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:21.610 15:02:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:21.610 15:02:44 -- nvmf/common.sh@421 -- # return 0 00:16:21.610 15:02:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:21.610 15:02:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:21.610 15:02:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:21.610 15:02:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:21.610 15:02:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:21.610 15:02:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:21.610 15:02:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:21.610 15:02:44 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:21.610 15:02:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:21.610 15:02:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:21.610 15:02:44 -- common/autotest_common.sh@10 -- # set +x 00:16:21.610 15:02:44 -- nvmf/common.sh@469 -- # nvmfpid=87933 00:16:21.610 15:02:44 -- nvmf/common.sh@470 -- # waitforlisten 87933 00:16:21.610 15:02:44 -- common/autotest_common.sh@829 -- # '[' -z 87933 ']' 00:16:21.610 15:02:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:21.610 15:02:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.610 15:02:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:21.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.610 15:02:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.610 15:02:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:21.610 15:02:44 -- common/autotest_common.sh@10 -- # set +x 00:16:21.610 [2024-11-18 15:02:45.041089] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:21.610 [2024-11-18 15:02:45.041167] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.610 [2024-11-18 15:02:45.181607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:21.869 [2024-11-18 15:02:45.239130] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:21.869 [2024-11-18 15:02:45.239271] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.869 [2024-11-18 15:02:45.239284] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.869 [2024-11-18 15:02:45.239292] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
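The veth/namespace bring-up traced above (10.0.0.1 on the initiator side, 10.0.0.2 and 10.0.0.3 inside nvmf_tgt_ns_spdk, joined by the nvmf_br bridge and verified by the three pings) can be reproduced on its own. The sketch below is a simplified, hand-written equivalent, not the exact nvmf_veth_init code, and wires up only the first target address.

    # Simplified sketch of the topology the pings above verify (run as root).
    # The real helper also adds a second target interface for 10.0.0.3.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator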
00:16:21.869 [2024-11-18 15:02:45.239472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:21.869 [2024-11-18 15:02:45.239646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:21.869 [2024-11-18 15:02:45.239757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:21.869 [2024-11-18 15:02:45.239758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:22.803 15:02:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.803 15:02:46 -- common/autotest_common.sh@862 -- # return 0 00:16:22.803 15:02:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:22.803 15:02:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:22.803 15:02:46 -- common/autotest_common.sh@10 -- # set +x 00:16:22.803 15:02:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.803 15:02:46 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:22.803 15:02:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.803 15:02:46 -- common/autotest_common.sh@10 -- # set +x 00:16:22.803 [2024-11-18 15:02:46.113462] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.803 15:02:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.803 15:02:46 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:22.803 15:02:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.803 15:02:46 -- common/autotest_common.sh@10 -- # set +x 00:16:22.803 Malloc0 00:16:22.803 15:02:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.803 15:02:46 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:22.803 15:02:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.803 15:02:46 -- common/autotest_common.sh@10 -- # set +x 00:16:22.803 15:02:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.803 15:02:46 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:22.803 15:02:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.803 15:02:46 -- common/autotest_common.sh@10 -- # set +x 00:16:22.803 15:02:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.803 15:02:46 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.803 15:02:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.803 15:02:46 -- common/autotest_common.sh@10 -- # set +x 00:16:22.803 [2024-11-18 15:02:46.182405] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.803 15:02:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.803 15:02:46 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:22.803 15:02:46 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:22.803 15:02:46 -- nvmf/common.sh@520 -- # config=() 00:16:22.803 15:02:46 -- nvmf/common.sh@520 -- # local subsystem config 00:16:22.803 15:02:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:22.803 15:02:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:22.803 { 00:16:22.803 "params": { 00:16:22.803 "name": "Nvme$subsystem", 00:16:22.803 "trtype": "$TEST_TRANSPORT", 00:16:22.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:22.803 "adrfam": "ipv4", 00:16:22.803 "trsvcid": "$NVMF_PORT", 00:16:22.803 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:22.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:22.803 "hdgst": ${hdgst:-false}, 00:16:22.803 "ddgst": ${ddgst:-false} 00:16:22.803 }, 00:16:22.803 "method": "bdev_nvme_attach_controller" 00:16:22.803 } 00:16:22.803 EOF 00:16:22.803 )") 00:16:22.803 15:02:46 -- nvmf/common.sh@542 -- # cat 00:16:22.803 15:02:46 -- nvmf/common.sh@544 -- # jq . 00:16:22.803 15:02:46 -- nvmf/common.sh@545 -- # IFS=, 00:16:22.803 15:02:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:22.803 "params": { 00:16:22.803 "name": "Nvme1", 00:16:22.803 "trtype": "tcp", 00:16:22.803 "traddr": "10.0.0.2", 00:16:22.803 "adrfam": "ipv4", 00:16:22.803 "trsvcid": "4420", 00:16:22.803 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:22.803 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:22.803 "hdgst": false, 00:16:22.803 "ddgst": false 00:16:22.803 }, 00:16:22.803 "method": "bdev_nvme_attach_controller" 00:16:22.803 }' 00:16:22.803 [2024-11-18 15:02:46.226800] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:22.803 [2024-11-18 15:02:46.226862] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87993 ] 00:16:22.803 [2024-11-18 15:02:46.362524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:23.061 [2024-11-18 15:02:46.446349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.061 [2024-11-18 15:02:46.446485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.061 [2024-11-18 15:02:46.446496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.319 [2024-11-18 15:02:46.652823] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:16:23.319 [2024-11-18 15:02:46.652886] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:23.319 I/O targets: 00:16:23.319 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:23.319 00:16:23.319 00:16:23.319 CUnit - A unit testing framework for C - Version 2.1-3 00:16:23.319 http://cunit.sourceforge.net/ 00:16:23.319 00:16:23.319 00:16:23.319 Suite: bdevio tests on: Nvme1n1 00:16:23.319 Test: blockdev write read block ...passed 00:16:23.319 Test: blockdev write zeroes read block ...passed 00:16:23.319 Test: blockdev write zeroes read no split ...passed 00:16:23.319 Test: blockdev write zeroes read split ...passed 00:16:23.319 Test: blockdev write zeroes read split partial ...passed 00:16:23.319 Test: blockdev reset ...[2024-11-18 15:02:46.772376] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:23.319 [2024-11-18 15:02:46.772509] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1344ed0 (9): Bad file descriptor 00:16:23.319 [2024-11-18 15:02:46.792884] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:23.319 passed 00:16:23.319 Test: blockdev write read 8 blocks ...passed 00:16:23.319 Test: blockdev write read size > 128k ...passed 00:16:23.319 Test: blockdev write read invalid size ...passed 00:16:23.319 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:23.319 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:23.319 Test: blockdev write read max offset ...passed 00:16:23.577 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:23.577 Test: blockdev writev readv 8 blocks ...passed 00:16:23.577 Test: blockdev writev readv 30 x 1block ...passed 00:16:23.577 Test: blockdev writev readv block ...passed 00:16:23.577 Test: blockdev writev readv size > 128k ...passed 00:16:23.577 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:23.577 Test: blockdev comparev and writev ...[2024-11-18 15:02:46.964065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.577 [2024-11-18 15:02:46.964226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.577 [2024-11-18 15:02:46.964324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.577 [2024-11-18 15:02:46.964442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:23.577 [2024-11-18 15:02:46.964957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.577 [2024-11-18 15:02:46.965087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:23.577 [2024-11-18 15:02:46.965174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.577 [2024-11-18 15:02:46.965262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:23.577 [2024-11-18 15:02:46.965742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.577 [2024-11-18 15:02:46.965859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:23.577 [2024-11-18 15:02:46.965960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.577 [2024-11-18 15:02:46.966030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:23.577 [2024-11-18 15:02:46.966493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.577 [2024-11-18 15:02:46.966609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:23.577 [2024-11-18 15:02:46.966719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.577 [2024-11-18 15:02:46.966790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:23.577 passed 00:16:23.577 Test: blockdev nvme passthru rw ...passed 00:16:23.577 Test: blockdev nvme passthru vendor specific ...[2024-11-18 15:02:47.048630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:23.577 [2024-11-18 15:02:47.048752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:23.577 [2024-11-18 15:02:47.048989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:23.577 [2024-11-18 15:02:47.049094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:23.577 [2024-11-18 15:02:47.049280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:23.577 [2024-11-18 15:02:47.049399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:23.577 [2024-11-18 15:02:47.049598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:23.577 [2024-11-18 15:02:47.049696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:23.577 passed 00:16:23.577 Test: blockdev nvme admin passthru ...passed 00:16:23.577 Test: blockdev copy ...passed 00:16:23.577 00:16:23.577 Run Summary: Type Total Ran Passed Failed Inactive 00:16:23.577 suites 1 1 n/a 0 0 00:16:23.577 tests 23 23 23 0 0 00:16:23.577 asserts 152 152 152 0 n/a 00:16:23.577 00:16:23.577 Elapsed time = 0.897 seconds 00:16:23.834 15:02:47 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:23.834 15:02:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.834 15:02:47 -- common/autotest_common.sh@10 -- # set +x 00:16:23.834 15:02:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.834 15:02:47 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:23.834 15:02:47 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:23.834 15:02:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:23.834 15:02:47 -- nvmf/common.sh@116 -- # sync 00:16:23.834 15:02:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:23.834 15:02:47 -- nvmf/common.sh@119 -- # set +e 00:16:23.834 15:02:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:23.834 15:02:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:24.092 rmmod nvme_tcp 00:16:24.092 rmmod nvme_fabrics 00:16:24.092 rmmod nvme_keyring 00:16:24.092 15:02:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:24.092 15:02:47 -- nvmf/common.sh@123 -- # set -e 00:16:24.092 15:02:47 -- nvmf/common.sh@124 -- # return 0 00:16:24.092 15:02:47 -- nvmf/common.sh@477 -- # '[' -n 87933 ']' 00:16:24.092 15:02:47 -- nvmf/common.sh@478 -- # killprocess 87933 00:16:24.092 15:02:47 -- common/autotest_common.sh@936 -- # '[' -z 87933 ']' 00:16:24.092 15:02:47 -- common/autotest_common.sh@940 -- # kill -0 87933 00:16:24.092 15:02:47 -- common/autotest_common.sh@941 -- # uname 00:16:24.092 15:02:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:24.092 15:02:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87933 00:16:24.092 15:02:47 -- common/autotest_common.sh@942 -- 
# process_name=reactor_3 00:16:24.092 15:02:47 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:24.092 killing process with pid 87933 00:16:24.092 15:02:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87933' 00:16:24.092 15:02:47 -- common/autotest_common.sh@955 -- # kill 87933 00:16:24.092 15:02:47 -- common/autotest_common.sh@960 -- # wait 87933 00:16:24.351 15:02:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:24.351 15:02:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:24.351 15:02:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:24.351 15:02:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:24.351 15:02:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:24.351 15:02:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.351 15:02:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.351 15:02:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.351 15:02:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:24.351 00:16:24.351 real 0m3.422s 00:16:24.351 user 0m12.370s 00:16:24.351 sys 0m0.861s 00:16:24.351 15:02:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:24.351 15:02:47 -- common/autotest_common.sh@10 -- # set +x 00:16:24.351 ************************************ 00:16:24.351 END TEST nvmf_bdevio 00:16:24.351 ************************************ 00:16:24.351 15:02:47 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:24.351 15:02:47 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:24.351 15:02:47 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:24.351 15:02:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:24.351 15:02:47 -- common/autotest_common.sh@10 -- # set +x 00:16:24.351 ************************************ 00:16:24.351 START TEST nvmf_bdevio_no_huge 00:16:24.351 ************************************ 00:16:24.351 15:02:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:24.607 * Looking for test storage... 
00:16:24.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:24.607 15:02:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:24.607 15:02:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:24.607 15:02:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:24.607 15:02:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:24.607 15:02:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:24.608 15:02:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:24.608 15:02:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:24.608 15:02:48 -- scripts/common.sh@335 -- # IFS=.-: 00:16:24.608 15:02:48 -- scripts/common.sh@335 -- # read -ra ver1 00:16:24.608 15:02:48 -- scripts/common.sh@336 -- # IFS=.-: 00:16:24.608 15:02:48 -- scripts/common.sh@336 -- # read -ra ver2 00:16:24.608 15:02:48 -- scripts/common.sh@337 -- # local 'op=<' 00:16:24.608 15:02:48 -- scripts/common.sh@339 -- # ver1_l=2 00:16:24.608 15:02:48 -- scripts/common.sh@340 -- # ver2_l=1 00:16:24.608 15:02:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:24.608 15:02:48 -- scripts/common.sh@343 -- # case "$op" in 00:16:24.608 15:02:48 -- scripts/common.sh@344 -- # : 1 00:16:24.608 15:02:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:24.608 15:02:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:24.608 15:02:48 -- scripts/common.sh@364 -- # decimal 1 00:16:24.608 15:02:48 -- scripts/common.sh@352 -- # local d=1 00:16:24.608 15:02:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:24.608 15:02:48 -- scripts/common.sh@354 -- # echo 1 00:16:24.608 15:02:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:24.608 15:02:48 -- scripts/common.sh@365 -- # decimal 2 00:16:24.608 15:02:48 -- scripts/common.sh@352 -- # local d=2 00:16:24.608 15:02:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:24.608 15:02:48 -- scripts/common.sh@354 -- # echo 2 00:16:24.608 15:02:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:24.608 15:02:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:24.608 15:02:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:24.608 15:02:48 -- scripts/common.sh@367 -- # return 0 00:16:24.608 15:02:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:24.608 15:02:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:24.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.608 --rc genhtml_branch_coverage=1 00:16:24.608 --rc genhtml_function_coverage=1 00:16:24.608 --rc genhtml_legend=1 00:16:24.608 --rc geninfo_all_blocks=1 00:16:24.608 --rc geninfo_unexecuted_blocks=1 00:16:24.608 00:16:24.608 ' 00:16:24.608 15:02:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:24.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.608 --rc genhtml_branch_coverage=1 00:16:24.608 --rc genhtml_function_coverage=1 00:16:24.608 --rc genhtml_legend=1 00:16:24.608 --rc geninfo_all_blocks=1 00:16:24.608 --rc geninfo_unexecuted_blocks=1 00:16:24.608 00:16:24.608 ' 00:16:24.608 15:02:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:24.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.608 --rc genhtml_branch_coverage=1 00:16:24.608 --rc genhtml_function_coverage=1 00:16:24.608 --rc genhtml_legend=1 00:16:24.608 --rc geninfo_all_blocks=1 00:16:24.608 --rc geninfo_unexecuted_blocks=1 00:16:24.608 00:16:24.608 ' 00:16:24.608 
15:02:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:24.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.608 --rc genhtml_branch_coverage=1 00:16:24.608 --rc genhtml_function_coverage=1 00:16:24.608 --rc genhtml_legend=1 00:16:24.608 --rc geninfo_all_blocks=1 00:16:24.608 --rc geninfo_unexecuted_blocks=1 00:16:24.608 00:16:24.608 ' 00:16:24.608 15:02:48 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:24.608 15:02:48 -- nvmf/common.sh@7 -- # uname -s 00:16:24.608 15:02:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.608 15:02:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.608 15:02:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.608 15:02:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.608 15:02:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.608 15:02:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.608 15:02:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.608 15:02:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.608 15:02:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.608 15:02:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.608 15:02:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:16:24.608 15:02:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:16:24.608 15:02:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.608 15:02:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.608 15:02:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:24.608 15:02:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:24.608 15:02:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.608 15:02:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.608 15:02:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.608 15:02:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.608 15:02:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.608 15:02:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.608 15:02:48 -- paths/export.sh@5 -- # export PATH 00:16:24.608 15:02:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.608 15:02:48 -- nvmf/common.sh@46 -- # : 0 00:16:24.608 15:02:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:24.608 15:02:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:24.608 15:02:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:24.608 15:02:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.608 15:02:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.608 15:02:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:24.608 15:02:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:24.608 15:02:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:24.608 15:02:48 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:24.608 15:02:48 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:24.608 15:02:48 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:24.608 15:02:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:24.608 15:02:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.608 15:02:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:24.608 15:02:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:24.608 15:02:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:24.608 15:02:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.608 15:02:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.608 15:02:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.608 15:02:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:24.608 15:02:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:24.608 15:02:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:24.608 15:02:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:24.608 15:02:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:24.608 15:02:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:24.608 15:02:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:24.608 15:02:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:24.608 15:02:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:24.608 15:02:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:24.608 15:02:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:24.608 15:02:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:24.608 15:02:48 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:24.608 15:02:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:24.608 15:02:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:24.609 15:02:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:24.609 15:02:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:24.609 15:02:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:24.609 15:02:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:24.609 15:02:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:24.609 Cannot find device "nvmf_tgt_br" 00:16:24.609 15:02:48 -- nvmf/common.sh@154 -- # true 00:16:24.609 15:02:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:24.609 Cannot find device "nvmf_tgt_br2" 00:16:24.609 15:02:48 -- nvmf/common.sh@155 -- # true 00:16:24.609 15:02:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:24.609 15:02:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:24.609 Cannot find device "nvmf_tgt_br" 00:16:24.609 15:02:48 -- nvmf/common.sh@157 -- # true 00:16:24.609 15:02:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:24.609 Cannot find device "nvmf_tgt_br2" 00:16:24.609 15:02:48 -- nvmf/common.sh@158 -- # true 00:16:24.609 15:02:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:24.609 15:02:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:24.866 15:02:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:24.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:24.866 15:02:48 -- nvmf/common.sh@161 -- # true 00:16:24.866 15:02:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:24.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:24.866 15:02:48 -- nvmf/common.sh@162 -- # true 00:16:24.866 15:02:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:24.866 15:02:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:24.866 15:02:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:24.866 15:02:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:24.866 15:02:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:24.866 15:02:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:24.866 15:02:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:24.866 15:02:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:24.866 15:02:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:24.866 15:02:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:24.866 15:02:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:24.866 15:02:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:24.866 15:02:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:24.866 15:02:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:24.866 15:02:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:24.866 15:02:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:24.866 15:02:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:24.866 15:02:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:24.866 15:02:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:24.866 15:02:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:24.866 15:02:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:24.866 15:02:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:24.866 15:02:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:24.866 15:02:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:24.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:24.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:16:24.866 00:16:24.866 --- 10.0.0.2 ping statistics --- 00:16:24.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.866 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:24.866 15:02:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:24.866 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:24.866 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:16:24.866 00:16:24.866 --- 10.0.0.3 ping statistics --- 00:16:24.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.866 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:16:24.866 15:02:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:24.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:24.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:16:24.866 00:16:24.866 --- 10.0.0.1 ping statistics --- 00:16:24.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.866 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:16:24.866 15:02:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.866 15:02:48 -- nvmf/common.sh@421 -- # return 0 00:16:24.866 15:02:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:24.866 15:02:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.866 15:02:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:24.866 15:02:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:24.866 15:02:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.866 15:02:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:24.866 15:02:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:25.124 15:02:48 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:25.124 15:02:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:25.124 15:02:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:25.124 15:02:48 -- common/autotest_common.sh@10 -- # set +x 00:16:25.124 15:02:48 -- nvmf/common.sh@469 -- # nvmfpid=88181 00:16:25.124 15:02:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:25.124 15:02:48 -- nvmf/common.sh@470 -- # waitforlisten 88181 00:16:25.124 15:02:48 -- common/autotest_common.sh@829 -- # '[' -z 88181 ']' 00:16:25.124 15:02:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.124 15:02:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:25.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
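The three successful pings above confirm the veth topology that the preceding trace builds before each target start. A condensed sketch of that same topology, using the interface names and addresses from the trace (cleanup, the 'nomaster' teardown steps, and error handling are omitted):

    # Initiator veth pair stays on the host; the two target pairs are moved into
    # the nvmf_tgt_ns_spdk namespace; all bridge-side ends are enslaved to nvmf_br.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3             # host-side reachability
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target-side reachability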
00:16:25.124 15:02:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.124 15:02:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:25.124 15:02:48 -- common/autotest_common.sh@10 -- # set +x 00:16:25.124 [2024-11-18 15:02:48.525404] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:25.124 [2024-11-18 15:02:48.525503] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:25.124 [2024-11-18 15:02:48.671577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:25.382 [2024-11-18 15:02:48.795324] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:25.382 [2024-11-18 15:02:48.795517] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.382 [2024-11-18 15:02:48.795535] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.382 [2024-11-18 15:02:48.795547] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:25.382 [2024-11-18 15:02:48.795727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:25.382 [2024-11-18 15:02:48.796361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:25.382 [2024-11-18 15:02:48.796509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:25.382 [2024-11-18 15:02:48.796524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:25.947 15:02:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:25.947 15:02:49 -- common/autotest_common.sh@862 -- # return 0 00:16:25.947 15:02:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:25.947 15:02:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:25.948 15:02:49 -- common/autotest_common.sh@10 -- # set +x 00:16:25.948 15:02:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.948 15:02:49 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:25.948 15:02:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.948 15:02:49 -- common/autotest_common.sh@10 -- # set +x 00:16:25.948 [2024-11-18 15:02:49.516958] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.948 15:02:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.948 15:02:49 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:25.948 15:02:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.948 15:02:49 -- common/autotest_common.sh@10 -- # set +x 00:16:26.206 Malloc0 00:16:26.206 15:02:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.206 15:02:49 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:26.206 15:02:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.206 15:02:49 -- common/autotest_common.sh@10 -- # set +x 00:16:26.206 15:02:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.206 15:02:49 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:26.206 15:02:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.206 
15:02:49 -- common/autotest_common.sh@10 -- # set +x 00:16:26.206 15:02:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.206 15:02:49 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:26.206 15:02:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.206 15:02:49 -- common/autotest_common.sh@10 -- # set +x 00:16:26.206 [2024-11-18 15:02:49.560653] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.206 15:02:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.206 15:02:49 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:26.206 15:02:49 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:26.206 15:02:49 -- nvmf/common.sh@520 -- # config=() 00:16:26.206 15:02:49 -- nvmf/common.sh@520 -- # local subsystem config 00:16:26.206 15:02:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:26.206 15:02:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:26.206 { 00:16:26.206 "params": { 00:16:26.206 "name": "Nvme$subsystem", 00:16:26.206 "trtype": "$TEST_TRANSPORT", 00:16:26.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:26.206 "adrfam": "ipv4", 00:16:26.206 "trsvcid": "$NVMF_PORT", 00:16:26.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:26.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:26.206 "hdgst": ${hdgst:-false}, 00:16:26.206 "ddgst": ${ddgst:-false} 00:16:26.206 }, 00:16:26.206 "method": "bdev_nvme_attach_controller" 00:16:26.206 } 00:16:26.206 EOF 00:16:26.206 )") 00:16:26.206 15:02:49 -- nvmf/common.sh@542 -- # cat 00:16:26.206 15:02:49 -- nvmf/common.sh@544 -- # jq . 00:16:26.206 15:02:49 -- nvmf/common.sh@545 -- # IFS=, 00:16:26.206 15:02:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:26.206 "params": { 00:16:26.206 "name": "Nvme1", 00:16:26.206 "trtype": "tcp", 00:16:26.206 "traddr": "10.0.0.2", 00:16:26.206 "adrfam": "ipv4", 00:16:26.206 "trsvcid": "4420", 00:16:26.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:26.206 "hdgst": false, 00:16:26.206 "ddgst": false 00:16:26.206 }, 00:16:26.206 "method": "bdev_nvme_attach_controller" 00:16:26.206 }' 00:16:26.206 [2024-11-18 15:02:49.623151] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:26.206 [2024-11-18 15:02:49.623274] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid88235 ] 00:16:26.206 [2024-11-18 15:02:49.770082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:26.464 [2024-11-18 15:02:49.931253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.464 [2024-11-18 15:02:49.931511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.464 [2024-11-18 15:02:49.931522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.722 [2024-11-18 15:02:50.149997] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
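Stripped of the xtrace noise, the target-side provisioning traced just above is five RPC calls. A sketch of the equivalent sequence with rpc.py against the default /var/tmp/spdk.sock, with all values copied from this run:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB in-capsule data
    $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then attaches to that listener through the generated bdev_nvme_attach_controller JSON shown above; a regular initiator would reach the same subsystem with something like 'nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1' (standard nvme-cli flags, not part of this run).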
00:16:26.722 [2024-11-18 15:02:50.150065] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:26.722 I/O targets: 00:16:26.722 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:26.722 00:16:26.722 00:16:26.722 CUnit - A unit testing framework for C - Version 2.1-3 00:16:26.722 http://cunit.sourceforge.net/ 00:16:26.722 00:16:26.722 00:16:26.722 Suite: bdevio tests on: Nvme1n1 00:16:26.722 Test: blockdev write read block ...passed 00:16:26.722 Test: blockdev write zeroes read block ...passed 00:16:26.722 Test: blockdev write zeroes read no split ...passed 00:16:26.722 Test: blockdev write zeroes read split ...passed 00:16:26.722 Test: blockdev write zeroes read split partial ...passed 00:16:26.722 Test: blockdev reset ...[2024-11-18 15:02:50.281932] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:26.722 [2024-11-18 15:02:50.282048] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa5820 (9): Bad file descriptor 00:16:26.722 [2024-11-18 15:02:50.298770] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:26.722 passed 00:16:26.722 Test: blockdev write read 8 blocks ...passed 00:16:26.722 Test: blockdev write read size > 128k ...passed 00:16:26.722 Test: blockdev write read invalid size ...passed 00:16:26.981 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:26.981 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:26.981 Test: blockdev write read max offset ...passed 00:16:26.981 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:26.981 Test: blockdev writev readv 8 blocks ...passed 00:16:26.981 Test: blockdev writev readv 30 x 1block ...passed 00:16:26.981 Test: blockdev writev readv block ...passed 00:16:26.981 Test: blockdev writev readv size > 128k ...passed 00:16:26.981 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:26.981 Test: blockdev comparev and writev ...[2024-11-18 15:02:50.474858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:26.981 [2024-11-18 15:02:50.474927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:26.981 [2024-11-18 15:02:50.474954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:26.981 [2024-11-18 15:02:50.474963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:26.981 [2024-11-18 15:02:50.475362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:26.981 [2024-11-18 15:02:50.475385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:26.981 [2024-11-18 15:02:50.475401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:26.981 [2024-11-18 15:02:50.475411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:26.981 [2024-11-18 15:02:50.475731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:26.981 [2024-11-18 15:02:50.475753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:26.981 [2024-11-18 15:02:50.475769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:26.981 [2024-11-18 15:02:50.475778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:26.981 [2024-11-18 15:02:50.476155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:26.981 [2024-11-18 15:02:50.476178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:26.981 [2024-11-18 15:02:50.476193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:26.981 [2024-11-18 15:02:50.476203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:26.981 passed 00:16:26.981 Test: blockdev nvme passthru rw ...passed 00:16:26.981 Test: blockdev nvme passthru vendor specific ...[2024-11-18 15:02:50.560649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:26.981 [2024-11-18 15:02:50.560678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:26.981 [2024-11-18 15:02:50.560863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:26.981 [2024-11-18 15:02:50.560879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:26.981 [2024-11-18 15:02:50.560994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:26.981 [2024-11-18 15:02:50.561017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:26.981 [2024-11-18 15:02:50.561126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:26.981 [2024-11-18 15:02:50.561149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:26.981 passed 00:16:27.239 Test: blockdev nvme admin passthru ...passed 00:16:27.239 Test: blockdev copy ...passed 00:16:27.239 00:16:27.239 Run Summary: Type Total Ran Passed Failed Inactive 00:16:27.239 suites 1 1 n/a 0 0 00:16:27.239 tests 23 23 23 0 0 00:16:27.239 asserts 152 152 152 0 n/a 00:16:27.239 00:16:27.239 Elapsed time = 0.932 seconds 00:16:27.498 15:02:51 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.498 15:02:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.498 15:02:51 -- common/autotest_common.sh@10 -- # set +x 00:16:27.498 15:02:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.498 15:02:51 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:27.498 15:02:51 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:27.498 15:02:51 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:27.498 15:02:51 -- nvmf/common.sh@116 -- # sync 00:16:27.756 15:02:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:27.756 15:02:51 -- nvmf/common.sh@119 -- # set +e 00:16:27.757 15:02:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:27.757 15:02:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:27.757 rmmod nvme_tcp 00:16:27.757 rmmod nvme_fabrics 00:16:27.757 rmmod nvme_keyring 00:16:27.757 15:02:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:27.757 15:02:51 -- nvmf/common.sh@123 -- # set -e 00:16:27.757 15:02:51 -- nvmf/common.sh@124 -- # return 0 00:16:27.757 15:02:51 -- nvmf/common.sh@477 -- # '[' -n 88181 ']' 00:16:27.757 15:02:51 -- nvmf/common.sh@478 -- # killprocess 88181 00:16:27.757 15:02:51 -- common/autotest_common.sh@936 -- # '[' -z 88181 ']' 00:16:27.757 15:02:51 -- common/autotest_common.sh@940 -- # kill -0 88181 00:16:27.757 15:02:51 -- common/autotest_common.sh@941 -- # uname 00:16:27.757 15:02:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:27.757 15:02:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88181 00:16:27.757 killing process with pid 88181 00:16:27.757 15:02:51 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:27.757 15:02:51 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:27.757 15:02:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88181' 00:16:27.757 15:02:51 -- common/autotest_common.sh@955 -- # kill 88181 00:16:27.757 15:02:51 -- common/autotest_common.sh@960 -- # wait 88181 00:16:28.015 15:02:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:28.015 15:02:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:28.015 15:02:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:28.015 15:02:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:28.015 15:02:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:28.015 15:02:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.015 15:02:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.015 15:02:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.273 15:02:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:28.273 00:16:28.273 real 0m3.757s 00:16:28.273 user 0m13.280s 00:16:28.273 sys 0m1.485s 00:16:28.273 15:02:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:28.273 15:02:51 -- common/autotest_common.sh@10 -- # set +x 00:16:28.273 ************************************ 00:16:28.273 END TEST nvmf_bdevio_no_huge 00:16:28.273 ************************************ 00:16:28.274 15:02:51 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:28.274 15:02:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:28.274 15:02:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:28.274 15:02:51 -- common/autotest_common.sh@10 -- # set +x 00:16:28.274 ************************************ 00:16:28.274 START TEST nvmf_tls 00:16:28.274 ************************************ 00:16:28.274 15:02:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:28.274 * Looking for test storage... 
00:16:28.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:28.274 15:02:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:28.274 15:02:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:28.274 15:02:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:28.274 15:02:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:28.274 15:02:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:28.274 15:02:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:28.274 15:02:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:28.274 15:02:51 -- scripts/common.sh@335 -- # IFS=.-: 00:16:28.274 15:02:51 -- scripts/common.sh@335 -- # read -ra ver1 00:16:28.274 15:02:51 -- scripts/common.sh@336 -- # IFS=.-: 00:16:28.274 15:02:51 -- scripts/common.sh@336 -- # read -ra ver2 00:16:28.274 15:02:51 -- scripts/common.sh@337 -- # local 'op=<' 00:16:28.274 15:02:51 -- scripts/common.sh@339 -- # ver1_l=2 00:16:28.274 15:02:51 -- scripts/common.sh@340 -- # ver2_l=1 00:16:28.274 15:02:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:28.274 15:02:51 -- scripts/common.sh@343 -- # case "$op" in 00:16:28.274 15:02:51 -- scripts/common.sh@344 -- # : 1 00:16:28.274 15:02:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:28.274 15:02:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:28.274 15:02:51 -- scripts/common.sh@364 -- # decimal 1 00:16:28.274 15:02:51 -- scripts/common.sh@352 -- # local d=1 00:16:28.274 15:02:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:28.274 15:02:51 -- scripts/common.sh@354 -- # echo 1 00:16:28.274 15:02:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:28.274 15:02:51 -- scripts/common.sh@365 -- # decimal 2 00:16:28.274 15:02:51 -- scripts/common.sh@352 -- # local d=2 00:16:28.274 15:02:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:28.274 15:02:51 -- scripts/common.sh@354 -- # echo 2 00:16:28.274 15:02:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:28.274 15:02:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:28.274 15:02:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:28.274 15:02:51 -- scripts/common.sh@367 -- # return 0 00:16:28.274 15:02:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:28.274 15:02:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:28.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.274 --rc genhtml_branch_coverage=1 00:16:28.274 --rc genhtml_function_coverage=1 00:16:28.274 --rc genhtml_legend=1 00:16:28.274 --rc geninfo_all_blocks=1 00:16:28.274 --rc geninfo_unexecuted_blocks=1 00:16:28.274 00:16:28.274 ' 00:16:28.274 15:02:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:28.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.274 --rc genhtml_branch_coverage=1 00:16:28.274 --rc genhtml_function_coverage=1 00:16:28.274 --rc genhtml_legend=1 00:16:28.274 --rc geninfo_all_blocks=1 00:16:28.274 --rc geninfo_unexecuted_blocks=1 00:16:28.274 00:16:28.274 ' 00:16:28.274 15:02:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:28.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.274 --rc genhtml_branch_coverage=1 00:16:28.274 --rc genhtml_function_coverage=1 00:16:28.274 --rc genhtml_legend=1 00:16:28.274 --rc geninfo_all_blocks=1 00:16:28.274 --rc geninfo_unexecuted_blocks=1 00:16:28.274 00:16:28.274 ' 00:16:28.274 
15:02:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:28.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.274 --rc genhtml_branch_coverage=1 00:16:28.274 --rc genhtml_function_coverage=1 00:16:28.274 --rc genhtml_legend=1 00:16:28.274 --rc geninfo_all_blocks=1 00:16:28.274 --rc geninfo_unexecuted_blocks=1 00:16:28.274 00:16:28.274 ' 00:16:28.274 15:02:51 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:28.274 15:02:51 -- nvmf/common.sh@7 -- # uname -s 00:16:28.274 15:02:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.274 15:02:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.274 15:02:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.274 15:02:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.274 15:02:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.274 15:02:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.274 15:02:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.274 15:02:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.274 15:02:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.274 15:02:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.274 15:02:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:16:28.274 15:02:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:16:28.274 15:02:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.274 15:02:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.274 15:02:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:28.274 15:02:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:28.534 15:02:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.534 15:02:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.534 15:02:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.534 15:02:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.534 15:02:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.534 15:02:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.534 15:02:51 -- paths/export.sh@5 -- # export PATH 00:16:28.534 15:02:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.534 15:02:51 -- nvmf/common.sh@46 -- # : 0 00:16:28.534 15:02:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:28.534 15:02:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:28.534 15:02:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:28.534 15:02:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.534 15:02:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.534 15:02:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:28.534 15:02:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:28.534 15:02:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:28.534 15:02:51 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:28.534 15:02:51 -- target/tls.sh@71 -- # nvmftestinit 00:16:28.534 15:02:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:28.534 15:02:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.534 15:02:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:28.534 15:02:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:28.534 15:02:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:28.534 15:02:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.534 15:02:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.534 15:02:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.534 15:02:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:28.534 15:02:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:28.534 15:02:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:28.534 15:02:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:28.534 15:02:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:28.534 15:02:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:28.534 15:02:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:28.534 15:02:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:28.534 15:02:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:28.534 15:02:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:28.534 15:02:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:28.534 15:02:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:28.534 15:02:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:28.534 
15:02:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:28.534 15:02:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:28.534 15:02:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:28.534 15:02:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:28.534 15:02:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:28.534 15:02:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:28.534 15:02:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:28.534 Cannot find device "nvmf_tgt_br" 00:16:28.534 15:02:51 -- nvmf/common.sh@154 -- # true 00:16:28.534 15:02:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:28.534 Cannot find device "nvmf_tgt_br2" 00:16:28.534 15:02:51 -- nvmf/common.sh@155 -- # true 00:16:28.534 15:02:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:28.534 15:02:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:28.534 Cannot find device "nvmf_tgt_br" 00:16:28.534 15:02:51 -- nvmf/common.sh@157 -- # true 00:16:28.534 15:02:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:28.534 Cannot find device "nvmf_tgt_br2" 00:16:28.534 15:02:51 -- nvmf/common.sh@158 -- # true 00:16:28.534 15:02:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:28.534 15:02:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:28.534 15:02:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:28.534 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:28.534 15:02:52 -- nvmf/common.sh@161 -- # true 00:16:28.534 15:02:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:28.534 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:28.534 15:02:52 -- nvmf/common.sh@162 -- # true 00:16:28.534 15:02:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:28.534 15:02:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:28.534 15:02:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:28.534 15:02:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:28.534 15:02:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:28.534 15:02:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:28.534 15:02:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:28.534 15:02:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:28.534 15:02:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:28.534 15:02:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:28.534 15:02:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:28.534 15:02:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:28.794 15:02:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:28.794 15:02:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:28.794 15:02:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:28.794 15:02:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:28.794 15:02:52 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:28.794 15:02:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:28.794 15:02:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:28.794 15:02:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:28.794 15:02:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:28.794 15:02:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:28.794 15:02:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:28.794 15:02:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:28.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:28.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:16:28.794 00:16:28.794 --- 10.0.0.2 ping statistics --- 00:16:28.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.794 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:16:28.794 15:02:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:28.794 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:28.794 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:16:28.794 00:16:28.794 --- 10.0.0.3 ping statistics --- 00:16:28.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.794 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:28.794 15:02:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:28.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:28.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:16:28.794 00:16:28.794 --- 10.0.0.1 ping statistics --- 00:16:28.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.794 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:16:28.794 15:02:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:28.794 15:02:52 -- nvmf/common.sh@421 -- # return 0 00:16:28.794 15:02:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:28.794 15:02:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:28.794 15:02:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:28.794 15:02:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:28.794 15:02:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:28.794 15:02:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:28.794 15:02:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:28.794 15:02:52 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:28.794 15:02:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:28.794 15:02:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:28.794 15:02:52 -- common/autotest_common.sh@10 -- # set +x 00:16:28.794 15:02:52 -- nvmf/common.sh@469 -- # nvmfpid=88423 00:16:28.794 15:02:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:28.794 15:02:52 -- nvmf/common.sh@470 -- # waitforlisten 88423 00:16:28.794 15:02:52 -- common/autotest_common.sh@829 -- # '[' -z 88423 ']' 00:16:28.794 15:02:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.794 15:02:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:28.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
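(Editor's note, not part of the captured log.) The trace above is nvmf_veth_init building the test network: a target namespace nvmf_tgt_ns_spdk, veth pairs for the initiator and two target interfaces, a bridge nvmf_br joining the host-side peers, an iptables rule for the NVMe/TCP port, and ping checks across 10.0.0.1-3. A minimal standalone sketch of the same topology, using the interface names and addresses taken from the log (assumed to run as root on a host where none of these links exist yet):

# Sketch of the veth/bridge topology nvmf_veth_init builds in the trace above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target pair 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target pair 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1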
00:16:28.794 15:02:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.794 15:02:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:28.794 15:02:52 -- common/autotest_common.sh@10 -- # set +x 00:16:28.794 [2024-11-18 15:02:52.306349] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:28.794 [2024-11-18 15:02:52.306453] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.053 [2024-11-18 15:02:52.451164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.053 [2024-11-18 15:02:52.529531] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:29.053 [2024-11-18 15:02:52.529726] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.053 [2024-11-18 15:02:52.529744] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.053 [2024-11-18 15:02:52.529756] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:29.053 [2024-11-18 15:02:52.529790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.620 15:02:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:29.620 15:02:53 -- common/autotest_common.sh@862 -- # return 0 00:16:29.620 15:02:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:29.620 15:02:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:29.620 15:02:53 -- common/autotest_common.sh@10 -- # set +x 00:16:29.620 15:02:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.620 15:02:53 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:29.620 15:02:53 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:29.879 true 00:16:29.879 15:02:53 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:29.879 15:02:53 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:30.137 15:02:53 -- target/tls.sh@82 -- # version=0 00:16:30.137 15:02:53 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:30.137 15:02:53 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:30.395 15:02:53 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:30.395 15:02:53 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:30.654 15:02:54 -- target/tls.sh@90 -- # version=13 00:16:30.654 15:02:54 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:30.654 15:02:54 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:30.911 15:02:54 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:30.911 15:02:54 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:31.169 15:02:54 -- target/tls.sh@98 -- # version=7 00:16:31.169 15:02:54 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:31.169 15:02:54 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:31.169 15:02:54 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:31.450 15:02:54 -- 
target/tls.sh@105 -- # ktls=false 00:16:31.450 15:02:54 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:31.450 15:02:54 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:31.450 15:02:55 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:31.450 15:02:55 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:31.721 15:02:55 -- target/tls.sh@113 -- # ktls=true 00:16:31.721 15:02:55 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:31.721 15:02:55 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:31.979 15:02:55 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:31.979 15:02:55 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:32.238 15:02:55 -- target/tls.sh@121 -- # ktls=false 00:16:32.238 15:02:55 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:32.238 15:02:55 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:16:32.238 15:02:55 -- target/tls.sh@49 -- # local key hash crc 00:16:32.238 15:02:55 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:32.238 15:02:55 -- target/tls.sh@51 -- # hash=01 00:16:32.238 15:02:55 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:32.238 15:02:55 -- target/tls.sh@52 -- # gzip -1 -c 00:16:32.238 15:02:55 -- target/tls.sh@52 -- # head -c 4 00:16:32.238 15:02:55 -- target/tls.sh@52 -- # tail -c8 00:16:32.238 15:02:55 -- target/tls.sh@52 -- # crc='p$H�' 00:16:32.238 15:02:55 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:32.238 15:02:55 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:32.238 15:02:55 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:32.238 15:02:55 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:32.238 15:02:55 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:32.238 15:02:55 -- target/tls.sh@49 -- # local key hash crc 00:16:32.238 15:02:55 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:32.238 15:02:55 -- target/tls.sh@51 -- # hash=01 00:16:32.238 15:02:55 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:32.238 15:02:55 -- target/tls.sh@52 -- # gzip -1 -c 00:16:32.238 15:02:55 -- target/tls.sh@52 -- # tail -c8 00:16:32.238 15:02:55 -- target/tls.sh@52 -- # head -c 4 00:16:32.238 15:02:55 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:32.238 15:02:55 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:32.238 15:02:55 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:32.238 15:02:55 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:32.238 15:02:55 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:32.238 15:02:55 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:32.238 15:02:55 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:32.238 15:02:55 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:32.238 15:02:55 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 
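(Editor's note, not part of the captured log.) The block above first toggles the ssl sock implementation's TLS version and kTLS flags through sock_impl_set_options/sock_impl_get_options, then uses format_interchange_psk to turn two raw hex keys into the NVMe TLS interchange format. The recipe visible in the trace: pull the 4-byte CRC32 out of the gzip trailer of the key string, append it to the key, base64 the result, and wrap it as NVMeTLSkey-1:<hash>:...: (hash 01 here; the 02 variant is used later for key_long). A hedged standalone sketch of that derivation:

key=00112233445566778899aabbccddeeff
hash=01
# gzip -1 output ends with an 8-byte trailer: 4-byte CRC32 (little endian) + 4-byte size;
# tail -c8 | head -c4 extracts just the CRC32 bytes, matching the crc seen in the trace.
crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
psk="NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
echo "$psk"   # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: per the log above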
00:16:32.238 15:02:55 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:32.238 15:02:55 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:32.238 15:02:55 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:32.496 15:02:56 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:33.064 15:02:56 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:33.064 15:02:56 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:33.064 15:02:56 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:33.064 [2024-11-18 15:02:56.606395] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:33.064 15:02:56 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:33.323 15:02:56 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:33.582 [2024-11-18 15:02:57.058442] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:33.582 [2024-11-18 15:02:57.058667] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.582 15:02:57 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:33.840 malloc0 00:16:33.840 15:02:57 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:34.099 15:02:57 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:34.357 15:02:57 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:46.567 Initializing NVMe Controllers 00:16:46.567 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:46.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:46.567 Initialization complete. Launching workers. 
00:16:46.567 ======================================================== 00:16:46.567 Latency(us) 00:16:46.567 Device Information : IOPS MiB/s Average min max 00:16:46.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11618.28 45.38 5509.54 1606.54 11240.03 00:16:46.567 ======================================================== 00:16:46.567 Total : 11618.28 45.38 5509.54 1606.54 11240.03 00:16:46.567 00:16:46.567 15:03:07 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:46.567 15:03:07 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:46.567 15:03:07 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:46.567 15:03:07 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:46.567 15:03:07 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:46.567 15:03:07 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:46.567 15:03:07 -- target/tls.sh@28 -- # bdevperf_pid=88795 00:16:46.567 15:03:07 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:46.567 15:03:07 -- target/tls.sh@31 -- # waitforlisten 88795 /var/tmp/bdevperf.sock 00:16:46.568 15:03:07 -- common/autotest_common.sh@829 -- # '[' -z 88795 ']' 00:16:46.568 15:03:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:46.568 15:03:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:46.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:46.568 15:03:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:46.568 15:03:07 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:46.568 15:03:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:46.568 15:03:07 -- common/autotest_common.sh@10 -- # set +x 00:16:46.568 [2024-11-18 15:03:08.013444] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:46.568 [2024-11-18 15:03:08.013535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88795 ] 00:16:46.568 [2024-11-18 15:03:08.154215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.568 [2024-11-18 15:03:08.230569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.568 15:03:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.568 15:03:08 -- common/autotest_common.sh@862 -- # return 0 00:16:46.568 15:03:08 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:46.568 [2024-11-18 15:03:09.178183] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:46.568 TLSTESTn1 00:16:46.568 15:03:09 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:46.568 Running I/O for 10 seconds... 
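(Editor's note, not part of the captured log.) The spdk_nvme_perf results just above come from the target configured earlier in the trace: a TCP transport, subsystem cnode1 backed by a malloc namespace, a TLS-enabled listener on 10.0.0.2:4420, and host1 registered with key1.txt. A condensed sketch of that target-side RPC sequence, copied from the commands in the log (the nvmf_tgt app itself runs inside nvmf_tgt_ns_spdk; rpc.py reaches it over /var/tmp/spdk.sock):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables TLS
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"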
00:16:56.544 00:16:56.544 Latency(us) 00:16:56.544 [2024-11-18T15:03:20.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.544 [2024-11-18T15:03:20.134Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:56.544 Verification LBA range: start 0x0 length 0x2000 00:16:56.544 TLSTESTn1 : 10.02 6386.15 24.95 0.00 0.00 20010.37 4319.42 27882.59 00:16:56.544 [2024-11-18T15:03:20.134Z] =================================================================================================================== 00:16:56.544 [2024-11-18T15:03:20.134Z] Total : 6386.15 24.95 0.00 0.00 20010.37 4319.42 27882.59 00:16:56.544 0 00:16:56.544 15:03:19 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:56.544 15:03:19 -- target/tls.sh@45 -- # killprocess 88795 00:16:56.544 15:03:19 -- common/autotest_common.sh@936 -- # '[' -z 88795 ']' 00:16:56.544 15:03:19 -- common/autotest_common.sh@940 -- # kill -0 88795 00:16:56.544 15:03:19 -- common/autotest_common.sh@941 -- # uname 00:16:56.544 15:03:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:56.544 15:03:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88795 00:16:56.544 15:03:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:56.544 15:03:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:56.544 15:03:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88795' 00:16:56.544 killing process with pid 88795 00:16:56.544 15:03:19 -- common/autotest_common.sh@955 -- # kill 88795 00:16:56.544 Received shutdown signal, test time was about 10.000000 seconds 00:16:56.544 00:16:56.544 Latency(us) 00:16:56.544 [2024-11-18T15:03:20.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.544 [2024-11-18T15:03:20.134Z] =================================================================================================================== 00:16:56.544 [2024-11-18T15:03:20.134Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:56.544 15:03:19 -- common/autotest_common.sh@960 -- # wait 88795 00:16:56.544 15:03:19 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:56.544 15:03:19 -- common/autotest_common.sh@650 -- # local es=0 00:16:56.544 15:03:19 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:56.544 15:03:19 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:56.544 15:03:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:56.544 15:03:19 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:56.544 15:03:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:56.544 15:03:19 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:56.544 15:03:19 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:56.544 15:03:19 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:56.544 15:03:19 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:56.544 15:03:19 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:16:56.545 15:03:19 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:56.545 
15:03:19 -- target/tls.sh@28 -- # bdevperf_pid=88942 00:16:56.545 15:03:19 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:56.545 15:03:19 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:56.545 15:03:19 -- target/tls.sh@31 -- # waitforlisten 88942 /var/tmp/bdevperf.sock 00:16:56.545 15:03:19 -- common/autotest_common.sh@829 -- # '[' -z 88942 ']' 00:16:56.545 15:03:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:56.545 15:03:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:56.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:56.545 15:03:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:56.545 15:03:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:56.545 15:03:19 -- common/autotest_common.sh@10 -- # set +x 00:16:56.545 [2024-11-18 15:03:19.732606] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:56.545 [2024-11-18 15:03:19.732699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88942 ] 00:16:56.545 [2024-11-18 15:03:19.866780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.545 [2024-11-18 15:03:19.923337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.112 15:03:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.112 15:03:20 -- common/autotest_common.sh@862 -- # return 0 00:16:57.112 15:03:20 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:57.370 [2024-11-18 15:03:20.872214] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:57.370 [2024-11-18 15:03:20.882043] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:57.370 [2024-11-18 15:03:20.882547] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8cc0 (107): Transport endpoint is not connected 00:16:57.370 [2024-11-18 15:03:20.883535] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8cc0 (9): Bad file descriptor 00:16:57.370 [2024-11-18 15:03:20.884531] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:57.370 [2024-11-18 15:03:20.884552] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:57.370 [2024-11-18 15:03:20.884562] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:57.370 2024/11/18 15:03:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:57.370 request: 00:16:57.370 { 00:16:57.370 "method": "bdev_nvme_attach_controller", 00:16:57.370 "params": { 00:16:57.370 "name": "TLSTEST", 00:16:57.370 "trtype": "tcp", 00:16:57.370 "traddr": "10.0.0.2", 00:16:57.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:57.370 "adrfam": "ipv4", 00:16:57.370 "trsvcid": "4420", 00:16:57.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.370 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:16:57.370 } 00:16:57.370 } 00:16:57.370 Got JSON-RPC error response 00:16:57.370 GoRPCClient: error on JSON-RPC call 00:16:57.370 15:03:20 -- target/tls.sh@36 -- # killprocess 88942 00:16:57.370 15:03:20 -- common/autotest_common.sh@936 -- # '[' -z 88942 ']' 00:16:57.370 15:03:20 -- common/autotest_common.sh@940 -- # kill -0 88942 00:16:57.370 15:03:20 -- common/autotest_common.sh@941 -- # uname 00:16:57.370 15:03:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:57.370 15:03:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88942 00:16:57.370 15:03:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:57.370 15:03:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:57.370 killing process with pid 88942 00:16:57.370 15:03:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88942' 00:16:57.370 15:03:20 -- common/autotest_common.sh@955 -- # kill 88942 00:16:57.370 Received shutdown signal, test time was about 10.000000 seconds 00:16:57.370 00:16:57.370 Latency(us) 00:16:57.370 [2024-11-18T15:03:20.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.370 [2024-11-18T15:03:20.960Z] =================================================================================================================== 00:16:57.370 [2024-11-18T15:03:20.961Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:57.371 15:03:20 -- common/autotest_common.sh@960 -- # wait 88942 00:16:57.630 15:03:21 -- target/tls.sh@37 -- # return 1 00:16:57.630 15:03:21 -- common/autotest_common.sh@653 -- # es=1 00:16:57.630 15:03:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:57.630 15:03:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:57.630 15:03:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:57.630 15:03:21 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:57.630 15:03:21 -- common/autotest_common.sh@650 -- # local es=0 00:16:57.630 15:03:21 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:57.630 15:03:21 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:57.630 15:03:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.630 15:03:21 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:57.630 15:03:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.630 15:03:21 -- common/autotest_common.sh@653 -- # 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:57.630 15:03:21 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:57.630 15:03:21 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:57.630 15:03:21 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:57.630 15:03:21 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:57.630 15:03:21 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:57.630 15:03:21 -- target/tls.sh@28 -- # bdevperf_pid=88988 00:16:57.630 15:03:21 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:57.630 15:03:21 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:57.630 15:03:21 -- target/tls.sh@31 -- # waitforlisten 88988 /var/tmp/bdevperf.sock 00:16:57.630 15:03:21 -- common/autotest_common.sh@829 -- # '[' -z 88988 ']' 00:16:57.630 15:03:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:57.630 15:03:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:57.630 15:03:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:57.630 15:03:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.630 15:03:21 -- common/autotest_common.sh@10 -- # set +x 00:16:57.890 [2024-11-18 15:03:21.228001] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:57.890 [2024-11-18 15:03:21.228099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88988 ] 00:16:57.890 [2024-11-18 15:03:21.368724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.890 [2024-11-18 15:03:21.424764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.825 15:03:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.825 15:03:22 -- common/autotest_common.sh@862 -- # return 0 00:16:58.825 15:03:22 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:59.085 [2024-11-18 15:03:22.455393] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:59.085 [2024-11-18 15:03:22.461957] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:59.085 [2024-11-18 15:03:22.461997] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:59.085 [2024-11-18 15:03:22.462067] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:59.085 [2024-11-18 15:03:22.462777] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x22d4cc0 (107): Transport endpoint is not connected 00:16:59.085 [2024-11-18 15:03:22.463762] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d4cc0 (9): Bad file descriptor 00:16:59.085 [2024-11-18 15:03:22.464760] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:59.085 [2024-11-18 15:03:22.464782] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:59.085 [2024-11-18 15:03:22.464791] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:59.085 2024/11/18 15:03:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:59.085 request: 00:16:59.085 { 00:16:59.085 "method": "bdev_nvme_attach_controller", 00:16:59.085 "params": { 00:16:59.085 "name": "TLSTEST", 00:16:59.085 "trtype": "tcp", 00:16:59.085 "traddr": "10.0.0.2", 00:16:59.085 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:59.085 "adrfam": "ipv4", 00:16:59.085 "trsvcid": "4420", 00:16:59.085 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.085 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:16:59.085 } 00:16:59.085 } 00:16:59.085 Got JSON-RPC error response 00:16:59.085 GoRPCClient: error on JSON-RPC call 00:16:59.085 15:03:22 -- target/tls.sh@36 -- # killprocess 88988 00:16:59.085 15:03:22 -- common/autotest_common.sh@936 -- # '[' -z 88988 ']' 00:16:59.085 15:03:22 -- common/autotest_common.sh@940 -- # kill -0 88988 00:16:59.085 15:03:22 -- common/autotest_common.sh@941 -- # uname 00:16:59.085 15:03:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:59.085 15:03:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88988 00:16:59.085 15:03:22 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:59.085 15:03:22 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:59.085 killing process with pid 88988 00:16:59.085 15:03:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88988' 00:16:59.085 Received shutdown signal, test time was about 10.000000 seconds 00:16:59.085 00:16:59.085 Latency(us) 00:16:59.085 [2024-11-18T15:03:22.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.085 [2024-11-18T15:03:22.676Z] =================================================================================================================== 00:16:59.086 [2024-11-18T15:03:22.676Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:59.086 15:03:22 -- common/autotest_common.sh@955 -- # kill 88988 00:16:59.086 15:03:22 -- common/autotest_common.sh@960 -- # wait 88988 00:16:59.345 15:03:22 -- target/tls.sh@37 -- # return 1 00:16:59.345 15:03:22 -- common/autotest_common.sh@653 -- # es=1 00:16:59.345 15:03:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:59.345 15:03:22 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:59.345 15:03:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:59.345 15:03:22 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:59.345 15:03:22 -- 
common/autotest_common.sh@650 -- # local es=0 00:16:59.345 15:03:22 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:59.345 15:03:22 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:59.345 15:03:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:59.345 15:03:22 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:59.345 15:03:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:59.345 15:03:22 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:59.345 15:03:22 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:59.345 15:03:22 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:59.345 15:03:22 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:59.345 15:03:22 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:59.345 15:03:22 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:59.345 15:03:22 -- target/tls.sh@28 -- # bdevperf_pid=89033 00:16:59.345 15:03:22 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:59.345 15:03:22 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:59.345 15:03:22 -- target/tls.sh@31 -- # waitforlisten 89033 /var/tmp/bdevperf.sock 00:16:59.345 15:03:22 -- common/autotest_common.sh@829 -- # '[' -z 89033 ']' 00:16:59.345 15:03:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:59.345 15:03:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:59.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:59.345 15:03:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:59.345 15:03:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:59.345 15:03:22 -- common/autotest_common.sh@10 -- # set +x 00:16:59.345 [2024-11-18 15:03:22.801546] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:59.345 [2024-11-18 15:03:22.801634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89033 ] 00:16:59.345 [2024-11-18 15:03:22.929635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.603 [2024-11-18 15:03:22.994857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.170 15:03:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:00.170 15:03:23 -- common/autotest_common.sh@862 -- # return 0 00:17:00.170 15:03:23 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:00.428 [2024-11-18 15:03:23.991149] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:00.428 [2024-11-18 15:03:24.002107] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:00.428 [2024-11-18 15:03:24.002146] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:00.428 [2024-11-18 15:03:24.002219] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:00.428 [2024-11-18 15:03:24.002439] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x978cc0 (107): Transport endpoint is not connected 00:17:00.428 [2024-11-18 15:03:24.003406] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x978cc0 (9): Bad file descriptor 00:17:00.428 [2024-11-18 15:03:24.004402] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:00.428 [2024-11-18 15:03:24.004422] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:00.428 [2024-11-18 15:03:24.004439] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
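(Editor's note, not part of the captured log.) The failure above is the intended negative case: the target looks up the PSK by an identity string derived from the connecting hostnqn and the target subnqn, and no nvmf_subsystem_add_host entry was registered for this pairing. The identity layout can be read straight off the error lines; a small sketch of the lookup key, inferred only from the log output:

# Identity string as printed in the errors above: "NVMe0R01 <hostnqn> <subnqn>".
# host1 was registered only against cnode1, so host1 + cnode2 has no PSK on the target.
hostnqn=nqn.2016-06.io.spdk:host1
subnqn=nqn.2016-06.io.spdk:cnode2
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"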
00:17:00.428 2024/11/18 15:03:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:00.428 request: 00:17:00.428 { 00:17:00.428 "method": "bdev_nvme_attach_controller", 00:17:00.428 "params": { 00:17:00.428 "name": "TLSTEST", 00:17:00.428 "trtype": "tcp", 00:17:00.428 "traddr": "10.0.0.2", 00:17:00.428 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:00.428 "adrfam": "ipv4", 00:17:00.428 "trsvcid": "4420", 00:17:00.428 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:00.428 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:00.428 } 00:17:00.428 } 00:17:00.428 Got JSON-RPC error response 00:17:00.428 GoRPCClient: error on JSON-RPC call 00:17:00.687 15:03:24 -- target/tls.sh@36 -- # killprocess 89033 00:17:00.687 15:03:24 -- common/autotest_common.sh@936 -- # '[' -z 89033 ']' 00:17:00.687 15:03:24 -- common/autotest_common.sh@940 -- # kill -0 89033 00:17:00.687 15:03:24 -- common/autotest_common.sh@941 -- # uname 00:17:00.687 15:03:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:00.687 15:03:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89033 00:17:00.687 killing process with pid 89033 00:17:00.687 Received shutdown signal, test time was about 10.000000 seconds 00:17:00.687 00:17:00.687 Latency(us) 00:17:00.687 [2024-11-18T15:03:24.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.687 [2024-11-18T15:03:24.277Z] =================================================================================================================== 00:17:00.687 [2024-11-18T15:03:24.277Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:00.687 15:03:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:00.687 15:03:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:00.687 15:03:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89033' 00:17:00.687 15:03:24 -- common/autotest_common.sh@955 -- # kill 89033 00:17:00.687 15:03:24 -- common/autotest_common.sh@960 -- # wait 89033 00:17:00.946 15:03:24 -- target/tls.sh@37 -- # return 1 00:17:00.946 15:03:24 -- common/autotest_common.sh@653 -- # es=1 00:17:00.946 15:03:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:00.946 15:03:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:00.946 15:03:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:00.946 15:03:24 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:00.946 15:03:24 -- common/autotest_common.sh@650 -- # local es=0 00:17:00.946 15:03:24 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:00.946 15:03:24 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:00.946 15:03:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.946 15:03:24 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:00.946 15:03:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.946 15:03:24 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:00.946 15:03:24 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:00.946 15:03:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:00.946 15:03:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:00.946 15:03:24 -- target/tls.sh@23 -- # psk= 00:17:00.946 15:03:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:00.946 15:03:24 -- target/tls.sh@28 -- # bdevperf_pid=89079 00:17:00.946 15:03:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:00.946 15:03:24 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:00.946 15:03:24 -- target/tls.sh@31 -- # waitforlisten 89079 /var/tmp/bdevperf.sock 00:17:00.946 15:03:24 -- common/autotest_common.sh@829 -- # '[' -z 89079 ']' 00:17:00.946 15:03:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:00.946 15:03:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:00.946 15:03:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:00.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:00.946 15:03:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:00.946 15:03:24 -- common/autotest_common.sh@10 -- # set +x 00:17:00.946 [2024-11-18 15:03:24.344408] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:00.946 [2024-11-18 15:03:24.344698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89079 ] 00:17:00.946 [2024-11-18 15:03:24.482646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.204 [2024-11-18 15:03:24.540384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.769 15:03:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:01.769 15:03:25 -- common/autotest_common.sh@862 -- # return 0 00:17:01.769 15:03:25 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:02.027 [2024-11-18 15:03:25.572405] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:02.027 [2024-11-18 15:03:25.574202] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c718c0 (9): Bad file descriptor 00:17:02.027 [2024-11-18 15:03:25.575197] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:02.027 [2024-11-18 15:03:25.575220] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:02.027 [2024-11-18 15:03:25.575231] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:02.027 2024/11/18 15:03:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:02.027 request: 00:17:02.027 { 00:17:02.027 "method": "bdev_nvme_attach_controller", 00:17:02.027 "params": { 00:17:02.027 "name": "TLSTEST", 00:17:02.027 "trtype": "tcp", 00:17:02.027 "traddr": "10.0.0.2", 00:17:02.027 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:02.027 "adrfam": "ipv4", 00:17:02.027 "trsvcid": "4420", 00:17:02.027 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:17:02.027 } 00:17:02.027 } 00:17:02.027 Got JSON-RPC error response 00:17:02.027 GoRPCClient: error on JSON-RPC call 00:17:02.027 15:03:25 -- target/tls.sh@36 -- # killprocess 89079 00:17:02.027 15:03:25 -- common/autotest_common.sh@936 -- # '[' -z 89079 ']' 00:17:02.027 15:03:25 -- common/autotest_common.sh@940 -- # kill -0 89079 00:17:02.027 15:03:25 -- common/autotest_common.sh@941 -- # uname 00:17:02.027 15:03:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:02.027 15:03:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89079 00:17:02.287 killing process with pid 89079 00:17:02.287 Received shutdown signal, test time was about 10.000000 seconds 00:17:02.287 00:17:02.287 Latency(us) 00:17:02.287 [2024-11-18T15:03:25.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.287 [2024-11-18T15:03:25.877Z] =================================================================================================================== 00:17:02.287 [2024-11-18T15:03:25.877Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:02.287 15:03:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:02.287 15:03:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:02.287 15:03:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89079' 00:17:02.287 15:03:25 -- common/autotest_common.sh@955 -- # kill 89079 00:17:02.287 15:03:25 -- common/autotest_common.sh@960 -- # wait 89079 00:17:02.287 15:03:25 -- target/tls.sh@37 -- # return 1 00:17:02.287 15:03:25 -- common/autotest_common.sh@653 -- # es=1 00:17:02.287 15:03:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:02.287 15:03:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:02.287 15:03:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:02.287 15:03:25 -- target/tls.sh@167 -- # killprocess 88423 00:17:02.287 15:03:25 -- common/autotest_common.sh@936 -- # '[' -z 88423 ']' 00:17:02.287 15:03:25 -- common/autotest_common.sh@940 -- # kill -0 88423 00:17:02.287 15:03:25 -- common/autotest_common.sh@941 -- # uname 00:17:02.287 15:03:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:02.287 15:03:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88423 00:17:02.287 killing process with pid 88423 00:17:02.287 15:03:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:02.287 15:03:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:02.287 15:03:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88423' 00:17:02.287 15:03:25 -- common/autotest_common.sh@955 -- # kill 88423 00:17:02.287 15:03:25 -- common/autotest_common.sh@960 -- # wait 88423 00:17:02.546 15:03:26 -- target/tls.sh@168 -- # format_interchange_psk 
00112233445566778899aabbccddeeff0011223344556677 02 00:17:02.546 15:03:26 -- target/tls.sh@49 -- # local key hash crc 00:17:02.546 15:03:26 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:02.546 15:03:26 -- target/tls.sh@51 -- # hash=02 00:17:02.546 15:03:26 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:17:02.546 15:03:26 -- target/tls.sh@52 -- # gzip -1 -c 00:17:02.546 15:03:26 -- target/tls.sh@52 -- # tail -c8 00:17:02.546 15:03:26 -- target/tls.sh@52 -- # head -c 4 00:17:02.546 15:03:26 -- target/tls.sh@52 -- # crc='�e�'\''' 00:17:02.546 15:03:26 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:02.546 15:03:26 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:17:02.546 15:03:26 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:02.546 15:03:26 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:02.546 15:03:26 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:02.546 15:03:26 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:02.546 15:03:26 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:02.546 15:03:26 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:17:02.546 15:03:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:02.546 15:03:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:02.546 15:03:26 -- common/autotest_common.sh@10 -- # set +x 00:17:02.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.546 15:03:26 -- nvmf/common.sh@469 -- # nvmfpid=89144 00:17:02.546 15:03:26 -- nvmf/common.sh@470 -- # waitforlisten 89144 00:17:02.546 15:03:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:02.546 15:03:26 -- common/autotest_common.sh@829 -- # '[' -z 89144 ']' 00:17:02.546 15:03:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.546 15:03:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:02.546 15:03:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.546 15:03:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:02.546 15:03:26 -- common/autotest_common.sh@10 -- # set +x 00:17:02.813 [2024-11-18 15:03:26.163471] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:02.813 [2024-11-18 15:03:26.163679] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.813 [2024-11-18 15:03:26.292375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.813 [2024-11-18 15:03:26.357418] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:02.813 [2024-11-18 15:03:26.357566] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:02.813 [2024-11-18 15:03:26.357578] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.813 [2024-11-18 15:03:26.357585] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.813 [2024-11-18 15:03:26.357618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.753 15:03:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.753 15:03:27 -- common/autotest_common.sh@862 -- # return 0 00:17:03.753 15:03:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:03.753 15:03:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:03.753 15:03:27 -- common/autotest_common.sh@10 -- # set +x 00:17:03.753 15:03:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.753 15:03:27 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:03.753 15:03:27 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:03.753 15:03:27 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:04.011 [2024-11-18 15:03:27.395116] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.011 15:03:27 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:04.269 15:03:27 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:04.527 [2024-11-18 15:03:27.899192] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:04.527 [2024-11-18 15:03:27.899424] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.527 15:03:27 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:04.786 malloc0 00:17:04.786 15:03:28 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:04.786 15:03:28 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:05.045 15:03:28 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:05.045 15:03:28 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:05.045 15:03:28 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:05.045 15:03:28 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:05.045 15:03:28 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:05.045 15:03:28 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:05.045 15:03:28 -- target/tls.sh@28 -- # bdevperf_pid=89242 00:17:05.045 15:03:28 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:05.045 15:03:28 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:05.045 15:03:28 -- target/tls.sh@31 -- # waitforlisten 89242 /var/tmp/bdevperf.sock 00:17:05.045 15:03:28 -- 
common/autotest_common.sh@829 -- # '[' -z 89242 ']' 00:17:05.045 15:03:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:05.045 15:03:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:05.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:05.045 15:03:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:05.045 15:03:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:05.045 15:03:28 -- common/autotest_common.sh@10 -- # set +x 00:17:05.045 [2024-11-18 15:03:28.592951] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:05.045 [2024-11-18 15:03:28.593046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89242 ] 00:17:05.305 [2024-11-18 15:03:28.730171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.305 [2024-11-18 15:03:28.786333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.266 15:03:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:06.266 15:03:29 -- common/autotest_common.sh@862 -- # return 0 00:17:06.266 15:03:29 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:06.266 [2024-11-18 15:03:29.760501] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:06.266 TLSTESTn1 00:17:06.266 15:03:29 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:06.524 Running I/O for 10 seconds... 
00:17:16.494 00:17:16.494 Latency(us) 00:17:16.494 [2024-11-18T15:03:40.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.494 [2024-11-18T15:03:40.084Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:16.494 Verification LBA range: start 0x0 length 0x2000 00:17:16.494 TLSTESTn1 : 10.01 6596.73 25.77 0.00 0.00 19374.79 3961.95 21686.46 00:17:16.494 [2024-11-18T15:03:40.084Z] =================================================================================================================== 00:17:16.494 [2024-11-18T15:03:40.084Z] Total : 6596.73 25.77 0.00 0.00 19374.79 3961.95 21686.46 00:17:16.494 0 00:17:16.494 15:03:39 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:16.494 15:03:39 -- target/tls.sh@45 -- # killprocess 89242 00:17:16.494 15:03:39 -- common/autotest_common.sh@936 -- # '[' -z 89242 ']' 00:17:16.494 15:03:39 -- common/autotest_common.sh@940 -- # kill -0 89242 00:17:16.494 15:03:39 -- common/autotest_common.sh@941 -- # uname 00:17:16.494 15:03:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:16.494 15:03:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89242 00:17:16.494 15:03:40 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:16.494 15:03:40 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:16.494 15:03:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89242' 00:17:16.494 killing process with pid 89242 00:17:16.494 15:03:40 -- common/autotest_common.sh@955 -- # kill 89242 00:17:16.494 Received shutdown signal, test time was about 10.000000 seconds 00:17:16.494 00:17:16.494 Latency(us) 00:17:16.494 [2024-11-18T15:03:40.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.494 [2024-11-18T15:03:40.084Z] =================================================================================================================== 00:17:16.494 [2024-11-18T15:03:40.084Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:16.494 15:03:40 -- common/autotest_common.sh@960 -- # wait 89242 00:17:16.752 15:03:40 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:16.752 15:03:40 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:16.752 15:03:40 -- common/autotest_common.sh@650 -- # local es=0 00:17:16.752 15:03:40 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:16.752 15:03:40 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:16.752 15:03:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.752 15:03:40 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:16.752 15:03:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.752 15:03:40 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:16.752 15:03:40 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:16.752 15:03:40 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:16.752 15:03:40 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:16.752 15:03:40 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:16.752 15:03:40 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:16.752 15:03:40 -- target/tls.sh@28 -- # bdevperf_pid=89389 00:17:16.752 15:03:40 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:16.752 15:03:40 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:16.752 15:03:40 -- target/tls.sh@31 -- # waitforlisten 89389 /var/tmp/bdevperf.sock 00:17:16.752 15:03:40 -- common/autotest_common.sh@829 -- # '[' -z 89389 ']' 00:17:16.752 15:03:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.752 15:03:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.752 15:03:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:16.752 15:03:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.752 15:03:40 -- common/autotest_common.sh@10 -- # set +x 00:17:16.752 [2024-11-18 15:03:40.252606] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:16.753 [2024-11-18 15:03:40.252716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89389 ] 00:17:17.011 [2024-11-18 15:03:40.392904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.011 [2024-11-18 15:03:40.458221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.999 15:03:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:17.999 15:03:41 -- common/autotest_common.sh@862 -- # return 0 00:17:17.999 15:03:41 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:17.999 [2024-11-18 15:03:41.405928] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:17.999 [2024-11-18 15:03:41.405988] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:17.999 2024/11/18 15:03:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:17.999 request: 00:17:17.999 { 00:17:17.999 "method": "bdev_nvme_attach_controller", 00:17:17.999 "params": { 00:17:17.999 "name": "TLSTEST", 00:17:17.999 "trtype": "tcp", 00:17:17.999 "traddr": "10.0.0.2", 00:17:17.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:17.999 "adrfam": "ipv4", 00:17:17.999 "trsvcid": "4420", 00:17:17.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.999 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:17.999 } 00:17:17.999 } 00:17:17.999 Got 
JSON-RPC error response 00:17:17.999 GoRPCClient: error on JSON-RPC call 00:17:17.999 15:03:41 -- target/tls.sh@36 -- # killprocess 89389 00:17:17.999 15:03:41 -- common/autotest_common.sh@936 -- # '[' -z 89389 ']' 00:17:17.999 15:03:41 -- common/autotest_common.sh@940 -- # kill -0 89389 00:17:17.999 15:03:41 -- common/autotest_common.sh@941 -- # uname 00:17:17.999 15:03:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:17.999 15:03:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89389 00:17:17.999 15:03:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:17.999 15:03:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:17.999 killing process with pid 89389 00:17:17.999 15:03:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89389' 00:17:17.999 15:03:41 -- common/autotest_common.sh@955 -- # kill 89389 00:17:17.999 Received shutdown signal, test time was about 10.000000 seconds 00:17:17.999 00:17:17.999 Latency(us) 00:17:17.999 [2024-11-18T15:03:41.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.999 [2024-11-18T15:03:41.589Z] =================================================================================================================== 00:17:17.999 [2024-11-18T15:03:41.589Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:18.000 15:03:41 -- common/autotest_common.sh@960 -- # wait 89389 00:17:18.258 15:03:41 -- target/tls.sh@37 -- # return 1 00:17:18.258 15:03:41 -- common/autotest_common.sh@653 -- # es=1 00:17:18.258 15:03:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:18.258 15:03:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:18.258 15:03:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:18.258 15:03:41 -- target/tls.sh@183 -- # killprocess 89144 00:17:18.258 15:03:41 -- common/autotest_common.sh@936 -- # '[' -z 89144 ']' 00:17:18.258 15:03:41 -- common/autotest_common.sh@940 -- # kill -0 89144 00:17:18.258 15:03:41 -- common/autotest_common.sh@941 -- # uname 00:17:18.258 15:03:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:18.258 15:03:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89144 00:17:18.258 15:03:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:18.258 15:03:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:18.258 killing process with pid 89144 00:17:18.258 15:03:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89144' 00:17:18.258 15:03:41 -- common/autotest_common.sh@955 -- # kill 89144 00:17:18.258 15:03:41 -- common/autotest_common.sh@960 -- # wait 89144 00:17:18.517 15:03:41 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:18.517 15:03:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:18.517 15:03:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:18.517 15:03:41 -- common/autotest_common.sh@10 -- # set +x 00:17:18.517 15:03:41 -- nvmf/common.sh@469 -- # nvmfpid=89440 00:17:18.517 15:03:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:18.517 15:03:41 -- nvmf/common.sh@470 -- # waitforlisten 89440 00:17:18.517 15:03:41 -- common/autotest_common.sh@829 -- # '[' -z 89440 ']' 00:17:18.517 15:03:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.517 15:03:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.517 
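The bdev_nvme_attach_controller failure above is the PSK file-permission check: target/tls.sh@179 relaxed the key file to 0666, so tcp_load_psk reports "Incorrect permissions for PSK file" and the attach is rejected with code -22, whereas the same file was accepted earlier while it was 0600. A minimal sketch of that kind of check (the exact mode bits SPDK rejects are an assumption here; the log only shows 0600 accepted and 0666 rejected):

```python
import os
import stat

def psk_permissions_ok(path: str) -> bool:
    # Assumption: any group/other access bit disqualifies the PSK file.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & (stat.S_IRWXG | stat.S_IRWXO) == 0

# 0600 passes, 0666 fails -- matching the chmod calls in this run.
```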
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.517 15:03:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.517 15:03:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.517 15:03:41 -- common/autotest_common.sh@10 -- # set +x 00:17:18.517 [2024-11-18 15:03:41.995406] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:18.517 [2024-11-18 15:03:41.995503] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.776 [2024-11-18 15:03:42.133947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.776 [2024-11-18 15:03:42.192400] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:18.776 [2024-11-18 15:03:42.192542] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.776 [2024-11-18 15:03:42.192554] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.776 [2024-11-18 15:03:42.192562] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.776 [2024-11-18 15:03:42.192588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.712 15:03:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:19.712 15:03:42 -- common/autotest_common.sh@862 -- # return 0 00:17:19.712 15:03:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:19.712 15:03:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:19.712 15:03:42 -- common/autotest_common.sh@10 -- # set +x 00:17:19.712 15:03:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.712 15:03:43 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:19.712 15:03:43 -- common/autotest_common.sh@650 -- # local es=0 00:17:19.712 15:03:43 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:19.712 15:03:43 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:17:19.712 15:03:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:19.712 15:03:43 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:17:19.712 15:03:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:19.712 15:03:43 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:19.712 15:03:43 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:19.712 15:03:43 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:19.712 [2024-11-18 15:03:43.264484] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.712 15:03:43 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:19.971 15:03:43 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:20.229 
[2024-11-18 15:03:43.720563] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:20.229 [2024-11-18 15:03:43.720798] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.229 15:03:43 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:20.487 malloc0 00:17:20.487 15:03:43 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:20.747 15:03:44 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:21.006 [2024-11-18 15:03:44.374365] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:21.006 [2024-11-18 15:03:44.374400] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:21.006 [2024-11-18 15:03:44.374416] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:21.006 2024/11/18 15:03:44 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:21.006 request: 00:17:21.006 { 00:17:21.006 "method": "nvmf_subsystem_add_host", 00:17:21.006 "params": { 00:17:21.006 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.006 "host": "nqn.2016-06.io.spdk:host1", 00:17:21.006 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:21.006 } 00:17:21.006 } 00:17:21.006 Got JSON-RPC error response 00:17:21.006 GoRPCClient: error on JSON-RPC call 00:17:21.006 15:03:44 -- common/autotest_common.sh@653 -- # es=1 00:17:21.006 15:03:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:21.006 15:03:44 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:21.006 15:03:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:21.006 15:03:44 -- target/tls.sh@189 -- # killprocess 89440 00:17:21.006 15:03:44 -- common/autotest_common.sh@936 -- # '[' -z 89440 ']' 00:17:21.006 15:03:44 -- common/autotest_common.sh@940 -- # kill -0 89440 00:17:21.006 15:03:44 -- common/autotest_common.sh@941 -- # uname 00:17:21.006 15:03:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:21.006 15:03:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89440 00:17:21.006 15:03:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:21.006 killing process with pid 89440 00:17:21.006 15:03:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:21.006 15:03:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89440' 00:17:21.006 15:03:44 -- common/autotest_common.sh@955 -- # kill 89440 00:17:21.006 15:03:44 -- common/autotest_common.sh@960 -- # wait 89440 00:17:21.265 15:03:44 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:21.265 15:03:44 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:21.265 15:03:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:21.265 15:03:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:21.265 15:03:44 -- common/autotest_common.sh@10 -- # set +x 00:17:21.265 15:03:44 -- nvmf/common.sh@469 -- # nvmfpid=89556 
00:17:21.265 15:03:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:21.265 15:03:44 -- nvmf/common.sh@470 -- # waitforlisten 89556 00:17:21.265 15:03:44 -- common/autotest_common.sh@829 -- # '[' -z 89556 ']' 00:17:21.265 15:03:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.265 15:03:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.265 15:03:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.265 15:03:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.265 15:03:44 -- common/autotest_common.sh@10 -- # set +x 00:17:21.265 [2024-11-18 15:03:44.734509] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:21.265 [2024-11-18 15:03:44.734607] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.524 [2024-11-18 15:03:44.860130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.524 [2024-11-18 15:03:44.921030] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:21.524 [2024-11-18 15:03:44.921178] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.524 [2024-11-18 15:03:44.921190] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.524 [2024-11-18 15:03:44.921198] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:21.524 [2024-11-18 15:03:44.921228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.460 15:03:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.460 15:03:45 -- common/autotest_common.sh@862 -- # return 0 00:17:22.460 15:03:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:22.460 15:03:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:22.460 15:03:45 -- common/autotest_common.sh@10 -- # set +x 00:17:22.460 15:03:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.460 15:03:45 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:22.460 15:03:45 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:22.460 15:03:45 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:22.460 [2024-11-18 15:03:46.028023] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.460 15:03:46 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:23.027 15:03:46 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:23.027 [2024-11-18 15:03:46.560121] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:23.027 [2024-11-18 15:03:46.560459] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.027 15:03:46 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:23.285 malloc0 00:17:23.285 15:03:46 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:23.544 15:03:47 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:23.803 15:03:47 -- target/tls.sh@197 -- # bdevperf_pid=89654 00:17:23.803 15:03:47 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:23.803 15:03:47 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:23.803 15:03:47 -- target/tls.sh@200 -- # waitforlisten 89654 /var/tmp/bdevperf.sock 00:17:23.803 15:03:47 -- common/autotest_common.sh@829 -- # '[' -z 89654 ']' 00:17:23.803 15:03:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:23.803 15:03:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:23.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:23.803 15:03:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:23.803 15:03:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:23.803 15:03:47 -- common/autotest_common.sh@10 -- # set +x 00:17:23.803 [2024-11-18 15:03:47.349609] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
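For reference, the RPC sequence traced above (setup_nvmf_tgt plus the subsystem, namespace, and host steps) can be replayed in one place; a sketch that issues exactly the rpc.py calls shown in the log, with paths taken from the trace:

```python
import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"                 # path from the trace
PSK = "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt"  # must be chmod 0600

def rpc(*args: str) -> None:
    subprocess.run([RPC, *args], check=True)

rpc("nvmf_create_transport", "-t", "tcp", "-o")
rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1", "-s", "SPDK00000000000001", "-m", "10")
rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
    "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")              # -k: TLS-secured listener
rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1")
rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
    "nqn.2016-06.io.spdk:host1", "--psk", PSK)
```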
00:17:23.803 [2024-11-18 15:03:47.349711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89654 ] 00:17:24.062 [2024-11-18 15:03:47.484470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.062 [2024-11-18 15:03:47.558037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:25.001 15:03:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:25.001 15:03:48 -- common/autotest_common.sh@862 -- # return 0 00:17:25.001 15:03:48 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:25.001 [2024-11-18 15:03:48.442778] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:25.001 TLSTESTn1 00:17:25.001 15:03:48 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:25.568 15:03:48 -- target/tls.sh@205 -- # tgtconf='{ 00:17:25.568 "subsystems": [ 00:17:25.568 { 00:17:25.568 "subsystem": "iobuf", 00:17:25.568 "config": [ 00:17:25.568 { 00:17:25.568 "method": "iobuf_set_options", 00:17:25.568 "params": { 00:17:25.568 "large_bufsize": 135168, 00:17:25.568 "large_pool_count": 1024, 00:17:25.568 "small_bufsize": 8192, 00:17:25.568 "small_pool_count": 8192 00:17:25.568 } 00:17:25.568 } 00:17:25.568 ] 00:17:25.568 }, 00:17:25.568 { 00:17:25.568 "subsystem": "sock", 00:17:25.568 "config": [ 00:17:25.568 { 00:17:25.568 "method": "sock_impl_set_options", 00:17:25.568 "params": { 00:17:25.568 "enable_ktls": false, 00:17:25.568 "enable_placement_id": 0, 00:17:25.568 "enable_quickack": false, 00:17:25.568 "enable_recv_pipe": true, 00:17:25.568 "enable_zerocopy_send_client": false, 00:17:25.568 "enable_zerocopy_send_server": true, 00:17:25.568 "impl_name": "posix", 00:17:25.568 "recv_buf_size": 2097152, 00:17:25.568 "send_buf_size": 2097152, 00:17:25.568 "tls_version": 0, 00:17:25.568 "zerocopy_threshold": 0 00:17:25.568 } 00:17:25.568 }, 00:17:25.568 { 00:17:25.568 "method": "sock_impl_set_options", 00:17:25.568 "params": { 00:17:25.568 "enable_ktls": false, 00:17:25.568 "enable_placement_id": 0, 00:17:25.568 "enable_quickack": false, 00:17:25.568 "enable_recv_pipe": true, 00:17:25.568 "enable_zerocopy_send_client": false, 00:17:25.568 "enable_zerocopy_send_server": true, 00:17:25.568 "impl_name": "ssl", 00:17:25.568 "recv_buf_size": 4096, 00:17:25.568 "send_buf_size": 4096, 00:17:25.568 "tls_version": 0, 00:17:25.568 "zerocopy_threshold": 0 00:17:25.568 } 00:17:25.568 } 00:17:25.568 ] 00:17:25.568 }, 00:17:25.568 { 00:17:25.568 "subsystem": "vmd", 00:17:25.568 "config": [] 00:17:25.568 }, 00:17:25.568 { 00:17:25.568 "subsystem": "accel", 00:17:25.568 "config": [ 00:17:25.568 { 00:17:25.568 "method": "accel_set_options", 00:17:25.568 "params": { 00:17:25.568 "buf_count": 2048, 00:17:25.568 "large_cache_size": 16, 00:17:25.568 "sequence_count": 2048, 00:17:25.568 "small_cache_size": 128, 00:17:25.568 "task_count": 2048 00:17:25.568 } 00:17:25.568 } 00:17:25.568 ] 00:17:25.568 }, 00:17:25.568 { 00:17:25.568 "subsystem": "bdev", 00:17:25.568 "config": [ 00:17:25.568 { 00:17:25.568 "method": "bdev_set_options", 00:17:25.568 "params": { 00:17:25.568 
"bdev_auto_examine": true, 00:17:25.568 "bdev_io_cache_size": 256, 00:17:25.568 "bdev_io_pool_size": 65535, 00:17:25.568 "iobuf_large_cache_size": 16, 00:17:25.568 "iobuf_small_cache_size": 128 00:17:25.568 } 00:17:25.568 }, 00:17:25.568 { 00:17:25.568 "method": "bdev_raid_set_options", 00:17:25.568 "params": { 00:17:25.568 "process_window_size_kb": 1024 00:17:25.568 } 00:17:25.568 }, 00:17:25.568 { 00:17:25.568 "method": "bdev_iscsi_set_options", 00:17:25.568 "params": { 00:17:25.568 "timeout_sec": 30 00:17:25.568 } 00:17:25.568 }, 00:17:25.568 { 00:17:25.568 "method": "bdev_nvme_set_options", 00:17:25.568 "params": { 00:17:25.568 "action_on_timeout": "none", 00:17:25.568 "allow_accel_sequence": false, 00:17:25.568 "arbitration_burst": 0, 00:17:25.568 "bdev_retry_count": 3, 00:17:25.568 "ctrlr_loss_timeout_sec": 0, 00:17:25.568 "delay_cmd_submit": true, 00:17:25.568 "fast_io_fail_timeout_sec": 0, 00:17:25.568 "generate_uuids": false, 00:17:25.568 "high_priority_weight": 0, 00:17:25.568 "io_path_stat": false, 00:17:25.568 "io_queue_requests": 0, 00:17:25.568 "keep_alive_timeout_ms": 10000, 00:17:25.568 "low_priority_weight": 0, 00:17:25.568 "medium_priority_weight": 0, 00:17:25.568 "nvme_adminq_poll_period_us": 10000, 00:17:25.568 "nvme_ioq_poll_period_us": 0, 00:17:25.568 "reconnect_delay_sec": 0, 00:17:25.568 "timeout_admin_us": 0, 00:17:25.568 "timeout_us": 0, 00:17:25.568 "transport_ack_timeout": 0, 00:17:25.568 "transport_retry_count": 4, 00:17:25.568 "transport_tos": 0 00:17:25.568 } 00:17:25.568 }, 00:17:25.568 { 00:17:25.568 "method": "bdev_nvme_set_hotplug", 00:17:25.568 "params": { 00:17:25.568 "enable": false, 00:17:25.568 "period_us": 100000 00:17:25.568 } 00:17:25.568 }, 00:17:25.568 { 00:17:25.568 "method": "bdev_malloc_create", 00:17:25.568 "params": { 00:17:25.568 "block_size": 4096, 00:17:25.568 "name": "malloc0", 00:17:25.568 "num_blocks": 8192, 00:17:25.568 "optimal_io_boundary": 0, 00:17:25.568 "physical_block_size": 4096, 00:17:25.568 "uuid": "c71c5672-56f5-4642-9ff9-0ad6420fa148" 00:17:25.568 } 00:17:25.568 }, 00:17:25.568 { 00:17:25.568 "method": "bdev_wait_for_examine" 00:17:25.568 } 00:17:25.568 ] 00:17:25.568 }, 00:17:25.568 { 00:17:25.569 "subsystem": "nbd", 00:17:25.569 "config": [] 00:17:25.569 }, 00:17:25.569 { 00:17:25.569 "subsystem": "scheduler", 00:17:25.569 "config": [ 00:17:25.569 { 00:17:25.569 "method": "framework_set_scheduler", 00:17:25.569 "params": { 00:17:25.569 "name": "static" 00:17:25.569 } 00:17:25.569 } 00:17:25.569 ] 00:17:25.569 }, 00:17:25.569 { 00:17:25.569 "subsystem": "nvmf", 00:17:25.569 "config": [ 00:17:25.569 { 00:17:25.569 "method": "nvmf_set_config", 00:17:25.569 "params": { 00:17:25.569 "admin_cmd_passthru": { 00:17:25.569 "identify_ctrlr": false 00:17:25.569 }, 00:17:25.569 "discovery_filter": "match_any" 00:17:25.569 } 00:17:25.569 }, 00:17:25.569 { 00:17:25.569 "method": "nvmf_set_max_subsystems", 00:17:25.569 "params": { 00:17:25.569 "max_subsystems": 1024 00:17:25.569 } 00:17:25.569 }, 00:17:25.569 { 00:17:25.569 "method": "nvmf_set_crdt", 00:17:25.569 "params": { 00:17:25.569 "crdt1": 0, 00:17:25.569 "crdt2": 0, 00:17:25.569 "crdt3": 0 00:17:25.569 } 00:17:25.569 }, 00:17:25.569 { 00:17:25.569 "method": "nvmf_create_transport", 00:17:25.569 "params": { 00:17:25.569 "abort_timeout_sec": 1, 00:17:25.569 "buf_cache_size": 4294967295, 00:17:25.569 "c2h_success": false, 00:17:25.569 "dif_insert_or_strip": false, 00:17:25.569 "in_capsule_data_size": 4096, 00:17:25.569 "io_unit_size": 131072, 00:17:25.569 "max_aq_depth": 128, 
00:17:25.569 "max_io_qpairs_per_ctrlr": 127, 00:17:25.569 "max_io_size": 131072, 00:17:25.569 "max_queue_depth": 128, 00:17:25.569 "num_shared_buffers": 511, 00:17:25.569 "sock_priority": 0, 00:17:25.569 "trtype": "TCP", 00:17:25.569 "zcopy": false 00:17:25.569 } 00:17:25.569 }, 00:17:25.569 { 00:17:25.569 "method": "nvmf_create_subsystem", 00:17:25.569 "params": { 00:17:25.569 "allow_any_host": false, 00:17:25.569 "ana_reporting": false, 00:17:25.569 "max_cntlid": 65519, 00:17:25.569 "max_namespaces": 10, 00:17:25.569 "min_cntlid": 1, 00:17:25.569 "model_number": "SPDK bdev Controller", 00:17:25.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:25.569 "serial_number": "SPDK00000000000001" 00:17:25.569 } 00:17:25.569 }, 00:17:25.569 { 00:17:25.569 "method": "nvmf_subsystem_add_host", 00:17:25.569 "params": { 00:17:25.569 "host": "nqn.2016-06.io.spdk:host1", 00:17:25.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:25.569 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:25.569 } 00:17:25.569 }, 00:17:25.569 { 00:17:25.569 "method": "nvmf_subsystem_add_ns", 00:17:25.569 "params": { 00:17:25.569 "namespace": { 00:17:25.569 "bdev_name": "malloc0", 00:17:25.569 "nguid": "C71C567256F546429FF90AD6420FA148", 00:17:25.569 "nsid": 1, 00:17:25.569 "uuid": "c71c5672-56f5-4642-9ff9-0ad6420fa148" 00:17:25.569 }, 00:17:25.569 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:25.569 } 00:17:25.569 }, 00:17:25.569 { 00:17:25.569 "method": "nvmf_subsystem_add_listener", 00:17:25.569 "params": { 00:17:25.569 "listen_address": { 00:17:25.569 "adrfam": "IPv4", 00:17:25.569 "traddr": "10.0.0.2", 00:17:25.569 "trsvcid": "4420", 00:17:25.569 "trtype": "TCP" 00:17:25.569 }, 00:17:25.569 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:25.569 "secure_channel": true 00:17:25.569 } 00:17:25.569 } 00:17:25.569 ] 00:17:25.569 } 00:17:25.569 ] 00:17:25.569 }' 00:17:25.569 15:03:48 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:25.827 15:03:49 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:25.827 "subsystems": [ 00:17:25.827 { 00:17:25.827 "subsystem": "iobuf", 00:17:25.827 "config": [ 00:17:25.827 { 00:17:25.828 "method": "iobuf_set_options", 00:17:25.828 "params": { 00:17:25.828 "large_bufsize": 135168, 00:17:25.828 "large_pool_count": 1024, 00:17:25.828 "small_bufsize": 8192, 00:17:25.828 "small_pool_count": 8192 00:17:25.828 } 00:17:25.828 } 00:17:25.828 ] 00:17:25.828 }, 00:17:25.828 { 00:17:25.828 "subsystem": "sock", 00:17:25.828 "config": [ 00:17:25.828 { 00:17:25.828 "method": "sock_impl_set_options", 00:17:25.828 "params": { 00:17:25.828 "enable_ktls": false, 00:17:25.828 "enable_placement_id": 0, 00:17:25.828 "enable_quickack": false, 00:17:25.828 "enable_recv_pipe": true, 00:17:25.828 "enable_zerocopy_send_client": false, 00:17:25.828 "enable_zerocopy_send_server": true, 00:17:25.828 "impl_name": "posix", 00:17:25.828 "recv_buf_size": 2097152, 00:17:25.828 "send_buf_size": 2097152, 00:17:25.828 "tls_version": 0, 00:17:25.828 "zerocopy_threshold": 0 00:17:25.828 } 00:17:25.828 }, 00:17:25.828 { 00:17:25.828 "method": "sock_impl_set_options", 00:17:25.828 "params": { 00:17:25.828 "enable_ktls": false, 00:17:25.828 "enable_placement_id": 0, 00:17:25.828 "enable_quickack": false, 00:17:25.828 "enable_recv_pipe": true, 00:17:25.828 "enable_zerocopy_send_client": false, 00:17:25.828 "enable_zerocopy_send_server": true, 00:17:25.828 "impl_name": "ssl", 00:17:25.828 "recv_buf_size": 4096, 00:17:25.828 "send_buf_size": 4096, 00:17:25.828 
"tls_version": 0, 00:17:25.828 "zerocopy_threshold": 0 00:17:25.828 } 00:17:25.828 } 00:17:25.828 ] 00:17:25.828 }, 00:17:25.828 { 00:17:25.828 "subsystem": "vmd", 00:17:25.828 "config": [] 00:17:25.828 }, 00:17:25.828 { 00:17:25.828 "subsystem": "accel", 00:17:25.828 "config": [ 00:17:25.828 { 00:17:25.828 "method": "accel_set_options", 00:17:25.828 "params": { 00:17:25.828 "buf_count": 2048, 00:17:25.828 "large_cache_size": 16, 00:17:25.828 "sequence_count": 2048, 00:17:25.828 "small_cache_size": 128, 00:17:25.828 "task_count": 2048 00:17:25.828 } 00:17:25.828 } 00:17:25.828 ] 00:17:25.828 }, 00:17:25.828 { 00:17:25.828 "subsystem": "bdev", 00:17:25.828 "config": [ 00:17:25.828 { 00:17:25.828 "method": "bdev_set_options", 00:17:25.828 "params": { 00:17:25.828 "bdev_auto_examine": true, 00:17:25.828 "bdev_io_cache_size": 256, 00:17:25.828 "bdev_io_pool_size": 65535, 00:17:25.828 "iobuf_large_cache_size": 16, 00:17:25.828 "iobuf_small_cache_size": 128 00:17:25.828 } 00:17:25.828 }, 00:17:25.828 { 00:17:25.828 "method": "bdev_raid_set_options", 00:17:25.828 "params": { 00:17:25.828 "process_window_size_kb": 1024 00:17:25.828 } 00:17:25.828 }, 00:17:25.828 { 00:17:25.828 "method": "bdev_iscsi_set_options", 00:17:25.828 "params": { 00:17:25.828 "timeout_sec": 30 00:17:25.828 } 00:17:25.828 }, 00:17:25.828 { 00:17:25.828 "method": "bdev_nvme_set_options", 00:17:25.828 "params": { 00:17:25.828 "action_on_timeout": "none", 00:17:25.828 "allow_accel_sequence": false, 00:17:25.828 "arbitration_burst": 0, 00:17:25.828 "bdev_retry_count": 3, 00:17:25.828 "ctrlr_loss_timeout_sec": 0, 00:17:25.828 "delay_cmd_submit": true, 00:17:25.828 "fast_io_fail_timeout_sec": 0, 00:17:25.828 "generate_uuids": false, 00:17:25.828 "high_priority_weight": 0, 00:17:25.828 "io_path_stat": false, 00:17:25.828 "io_queue_requests": 512, 00:17:25.828 "keep_alive_timeout_ms": 10000, 00:17:25.828 "low_priority_weight": 0, 00:17:25.828 "medium_priority_weight": 0, 00:17:25.828 "nvme_adminq_poll_period_us": 10000, 00:17:25.828 "nvme_ioq_poll_period_us": 0, 00:17:25.828 "reconnect_delay_sec": 0, 00:17:25.828 "timeout_admin_us": 0, 00:17:25.828 "timeout_us": 0, 00:17:25.828 "transport_ack_timeout": 0, 00:17:25.828 "transport_retry_count": 4, 00:17:25.828 "transport_tos": 0 00:17:25.828 } 00:17:25.828 }, 00:17:25.828 { 00:17:25.828 "method": "bdev_nvme_attach_controller", 00:17:25.828 "params": { 00:17:25.828 "adrfam": "IPv4", 00:17:25.828 "ctrlr_loss_timeout_sec": 0, 00:17:25.828 "ddgst": false, 00:17:25.828 "fast_io_fail_timeout_sec": 0, 00:17:25.828 "hdgst": false, 00:17:25.828 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:25.828 "name": "TLSTEST", 00:17:25.828 "prchk_guard": false, 00:17:25.828 "prchk_reftag": false, 00:17:25.828 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:25.828 "reconnect_delay_sec": 0, 00:17:25.828 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:25.828 "traddr": "10.0.0.2", 00:17:25.828 "trsvcid": "4420", 00:17:25.828 "trtype": "TCP" 00:17:25.828 } 00:17:25.828 }, 00:17:25.828 { 00:17:25.828 "method": "bdev_nvme_set_hotplug", 00:17:25.828 "params": { 00:17:25.828 "enable": false, 00:17:25.828 "period_us": 100000 00:17:25.828 } 00:17:25.828 }, 00:17:25.828 { 00:17:25.828 "method": "bdev_wait_for_examine" 00:17:25.828 } 00:17:25.828 ] 00:17:25.828 }, 00:17:25.828 { 00:17:25.828 "subsystem": "nbd", 00:17:25.828 "config": [] 00:17:25.829 } 00:17:25.829 ] 00:17:25.829 }' 00:17:25.829 15:03:49 -- target/tls.sh@208 -- # killprocess 89654 00:17:25.829 15:03:49 -- 
common/autotest_common.sh@936 -- # '[' -z 89654 ']' 00:17:25.829 15:03:49 -- common/autotest_common.sh@940 -- # kill -0 89654 00:17:25.829 15:03:49 -- common/autotest_common.sh@941 -- # uname 00:17:25.829 15:03:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:25.829 15:03:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89654 00:17:25.829 15:03:49 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:25.829 15:03:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:25.829 killing process with pid 89654 00:17:25.829 15:03:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89654' 00:17:25.829 Received shutdown signal, test time was about 10.000000 seconds 00:17:25.829 00:17:25.829 Latency(us) 00:17:25.829 [2024-11-18T15:03:49.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.829 [2024-11-18T15:03:49.419Z] =================================================================================================================== 00:17:25.829 [2024-11-18T15:03:49.419Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:25.829 15:03:49 -- common/autotest_common.sh@955 -- # kill 89654 00:17:25.829 15:03:49 -- common/autotest_common.sh@960 -- # wait 89654 00:17:26.087 15:03:49 -- target/tls.sh@209 -- # killprocess 89556 00:17:26.087 15:03:49 -- common/autotest_common.sh@936 -- # '[' -z 89556 ']' 00:17:26.087 15:03:49 -- common/autotest_common.sh@940 -- # kill -0 89556 00:17:26.087 15:03:49 -- common/autotest_common.sh@941 -- # uname 00:17:26.087 15:03:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:26.087 15:03:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89556 00:17:26.087 15:03:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:26.087 15:03:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:26.087 killing process with pid 89556 00:17:26.087 15:03:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89556' 00:17:26.087 15:03:49 -- common/autotest_common.sh@955 -- # kill 89556 00:17:26.087 15:03:49 -- common/autotest_common.sh@960 -- # wait 89556 00:17:26.347 15:03:49 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:26.347 15:03:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:26.347 15:03:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:26.347 15:03:49 -- target/tls.sh@212 -- # echo '{ 00:17:26.347 "subsystems": [ 00:17:26.347 { 00:17:26.347 "subsystem": "iobuf", 00:17:26.347 "config": [ 00:17:26.347 { 00:17:26.347 "method": "iobuf_set_options", 00:17:26.347 "params": { 00:17:26.347 "large_bufsize": 135168, 00:17:26.347 "large_pool_count": 1024, 00:17:26.347 "small_bufsize": 8192, 00:17:26.347 "small_pool_count": 8192 00:17:26.347 } 00:17:26.347 } 00:17:26.347 ] 00:17:26.347 }, 00:17:26.347 { 00:17:26.347 "subsystem": "sock", 00:17:26.347 "config": [ 00:17:26.347 { 00:17:26.347 "method": "sock_impl_set_options", 00:17:26.347 "params": { 00:17:26.347 "enable_ktls": false, 00:17:26.347 "enable_placement_id": 0, 00:17:26.347 "enable_quickack": false, 00:17:26.347 "enable_recv_pipe": true, 00:17:26.347 "enable_zerocopy_send_client": false, 00:17:26.347 "enable_zerocopy_send_server": true, 00:17:26.347 "impl_name": "posix", 00:17:26.347 "recv_buf_size": 2097152, 00:17:26.347 "send_buf_size": 2097152, 00:17:26.347 "tls_version": 0, 00:17:26.347 "zerocopy_threshold": 0 00:17:26.347 } 00:17:26.347 }, 00:17:26.347 { 00:17:26.347 
"method": "sock_impl_set_options", 00:17:26.347 "params": { 00:17:26.347 "enable_ktls": false, 00:17:26.347 "enable_placement_id": 0, 00:17:26.347 "enable_quickack": false, 00:17:26.347 "enable_recv_pipe": true, 00:17:26.347 "enable_zerocopy_send_client": false, 00:17:26.347 "enable_zerocopy_send_server": true, 00:17:26.347 "impl_name": "ssl", 00:17:26.347 "recv_buf_size": 4096, 00:17:26.347 "send_buf_size": 4096, 00:17:26.347 "tls_version": 0, 00:17:26.347 "zerocopy_threshold": 0 00:17:26.347 } 00:17:26.347 } 00:17:26.347 ] 00:17:26.347 }, 00:17:26.347 { 00:17:26.347 "subsystem": "vmd", 00:17:26.347 "config": [] 00:17:26.347 }, 00:17:26.347 { 00:17:26.347 "subsystem": "accel", 00:17:26.347 "config": [ 00:17:26.347 { 00:17:26.347 "method": "accel_set_options", 00:17:26.347 "params": { 00:17:26.347 "buf_count": 2048, 00:17:26.347 "large_cache_size": 16, 00:17:26.347 "sequence_count": 2048, 00:17:26.347 "small_cache_size": 128, 00:17:26.347 "task_count": 2048 00:17:26.347 } 00:17:26.347 } 00:17:26.347 ] 00:17:26.347 }, 00:17:26.347 { 00:17:26.347 "subsystem": "bdev", 00:17:26.347 "config": [ 00:17:26.347 { 00:17:26.347 "method": "bdev_set_options", 00:17:26.347 "params": { 00:17:26.347 "bdev_auto_examine": true, 00:17:26.347 "bdev_io_cache_size": 256, 00:17:26.347 "bdev_io_pool_size": 65535, 00:17:26.347 "iobuf_large_cache_size": 16, 00:17:26.347 "iobuf_small_cache_size": 128 00:17:26.347 } 00:17:26.347 }, 00:17:26.347 { 00:17:26.347 "method": "bdev_raid_set_options", 00:17:26.347 "params": { 00:17:26.347 "process_window_size_kb": 1024 00:17:26.347 } 00:17:26.347 }, 00:17:26.347 { 00:17:26.347 "method": "bdev_iscsi_set_options", 00:17:26.347 "params": { 00:17:26.347 "timeout_sec": 30 00:17:26.347 } 00:17:26.347 }, 00:17:26.347 { 00:17:26.347 "method": "bdev_nvme_set_options", 00:17:26.347 "params": { 00:17:26.347 "action_on_timeout": "none", 00:17:26.347 "allow_accel_sequence": false, 00:17:26.347 "arbitration_burst": 0, 00:17:26.347 "bdev_retry_count": 3, 00:17:26.347 "ctrlr_loss_timeout_sec": 0, 00:17:26.347 "delay_cmd_submit": true, 00:17:26.347 "fast_io_fail_timeout_sec": 0, 00:17:26.347 "generate_uuids": false, 00:17:26.347 "high_priority_weight": 0, 00:17:26.347 "io_path_stat": false, 00:17:26.347 "io_queue_requests": 0, 00:17:26.347 "keep_alive_timeout_ms": 10000, 00:17:26.347 "low_priority_weight": 0, 00:17:26.347 "medium_priority_weight": 0, 00:17:26.347 "nvme_adminq_poll_period_us": 10000, 00:17:26.347 "nvme_ioq_poll_period_us": 0, 00:17:26.347 "reconnect_delay_sec": 0, 00:17:26.347 "timeout_admin_us": 0, 00:17:26.347 "timeout_us": 0, 00:17:26.347 "transport_ack_timeout": 0, 00:17:26.347 "transport_retry_count": 4, 00:17:26.347 "transport_tos": 0 00:17:26.347 } 00:17:26.347 }, 00:17:26.347 { 00:17:26.347 "method": "bdev_nvme_set_hotplug", 00:17:26.347 "params": { 00:17:26.347 "enable": false, 00:17:26.347 "period_us": 100000 00:17:26.347 } 00:17:26.347 }, 00:17:26.347 { 00:17:26.347 "method": "bdev_malloc_create", 00:17:26.347 "params": { 00:17:26.347 "block_size": 4096, 00:17:26.347 "name": "malloc0", 00:17:26.347 "num_blocks": 8192, 00:17:26.347 "optimal_io_boundary": 0, 00:17:26.347 "physical_block_size": 4096, 00:17:26.347 "uuid": "c71c5672-56f5-4642-9ff9-0ad6420fa148" 00:17:26.347 } 00:17:26.347 }, 00:17:26.347 { 00:17:26.347 "method": "bdev_wait_for_examine" 00:17:26.347 } 00:17:26.347 ] 00:17:26.347 }, 00:17:26.347 { 00:17:26.347 "subsystem": "nbd", 00:17:26.347 "config": [] 00:17:26.347 }, 00:17:26.347 { 00:17:26.347 "subsystem": "scheduler", 00:17:26.347 "config": [ 
00:17:26.347 { 00:17:26.347 "method": "framework_set_scheduler", 00:17:26.347 "params": { 00:17:26.347 "name": "static" 00:17:26.347 } 00:17:26.347 } 00:17:26.347 ] 00:17:26.347 }, 00:17:26.347 { 00:17:26.347 "subsystem": "nvmf", 00:17:26.347 "config": [ 00:17:26.347 { 00:17:26.347 "method": "nvmf_set_config", 00:17:26.347 "params": { 00:17:26.347 "admin_cmd_passthru": { 00:17:26.347 "identify_ctrlr": false 00:17:26.347 }, 00:17:26.347 "discovery_filter": "match_any" 00:17:26.347 } 00:17:26.347 }, 00:17:26.347 { 00:17:26.347 "method": "nvmf_set_max_subsystems", 00:17:26.347 "params": { 00:17:26.347 "max_subsystems": 1024 00:17:26.347 } 00:17:26.347 }, 00:17:26.347 { 00:17:26.347 "method": "nvmf_set_crdt", 00:17:26.347 "params": { 00:17:26.347 "crdt1": 0, 00:17:26.347 "crdt2": 0, 00:17:26.347 "crdt3": 0 00:17:26.347 } 00:17:26.347 }, 00:17:26.347 { 00:17:26.348 "method": "nvmf_create_transport", 00:17:26.348 "params": { 00:17:26.348 "abort_timeout_sec": 1, 00:17:26.348 "buf_cache_size": 4294967295, 00:17:26.348 "c2h_success": false, 00:17:26.348 "dif_insert_or_strip": false, 00:17:26.348 "in_capsule_data_size": 4096, 00:17:26.348 "io_unit_size": 131072, 00:17:26.348 "max_aq_depth": 128, 00:17:26.348 "max_io_qpairs_per_ctrlr": 127, 00:17:26.348 "max_io_size": 131072, 00:17:26.348 "max_queue_depth": 128, 00:17:26.348 "num_shared_buffers": 511, 00:17:26.348 "sock_priority": 0, 00:17:26.348 "trtype": "TCP", 00:17:26.348 "zcopy": false 00:17:26.348 } 00:17:26.348 }, 00:17:26.348 { 00:17:26.348 "method": "nvmf_create_subsystem", 00:17:26.348 "params": { 00:17:26.348 "allow_any_host": false, 00:17:26.348 "ana_reporting": false, 00:17:26.348 "max_cntlid": 65519, 00:17:26.348 "max_namespaces": 10, 00:17:26.348 "min_cntlid": 1, 00:17:26.348 "model_number": "SPDK bdev Controller", 00:17:26.348 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.348 "serial_number": "SPDK00000000000001" 00:17:26.348 } 00:17:26.348 }, 00:17:26.348 { 00:17:26.348 "method": "nvmf_subsystem_add_host", 00:17:26.348 "params": { 00:17:26.348 "host": "nqn.2016-06.io.spdk:host1", 00:17:26.348 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.348 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:26.348 } 00:17:26.348 }, 00:17:26.348 { 00:17:26.348 "method": "nvmf_subsystem_add_ns", 00:17:26.348 "params": { 00:17:26.348 "namespace": { 00:17:26.348 "bdev_name": "malloc0", 00:17:26.348 "nguid": "C71C567256F546429FF90AD6420FA148", 00:17:26.348 "nsid": 1, 00:17:26.348 "uuid": "c71c5672-56f5-4642-9ff9-0ad6420fa148" 00:17:26.348 }, 00:17:26.348 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:26.348 } 00:17:26.348 }, 00:17:26.348 { 00:17:26.348 "method": "nvmf_subsystem_add_listener", 00:17:26.348 "params": { 00:17:26.348 "listen_address": { 00:17:26.348 "adrfam": "IPv4", 00:17:26.348 "traddr": "10.0.0.2", 00:17:26.348 "trsvcid": "4420", 00:17:26.348 "trtype": "TCP" 00:17:26.348 }, 00:17:26.348 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.348 "secure_channel": true 00:17:26.348 } 00:17:26.348 } 00:17:26.348 ] 00:17:26.348 } 00:17:26.348 ] 00:17:26.348 }' 00:17:26.348 15:03:49 -- common/autotest_common.sh@10 -- # set +x 00:17:26.348 15:03:49 -- nvmf/common.sh@469 -- # nvmfpid=89733 00:17:26.348 15:03:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:26.348 15:03:49 -- nvmf/common.sh@470 -- # waitforlisten 89733 00:17:26.348 15:03:49 -- common/autotest_common.sh@829 -- # '[' -z 89733 ']' 00:17:26.348 15:03:49 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.348 15:03:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:26.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.348 15:03:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.348 15:03:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:26.348 15:03:49 -- common/autotest_common.sh@10 -- # set +x 00:17:26.348 [2024-11-18 15:03:49.774309] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:26.348 [2024-11-18 15:03:49.774436] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.348 [2024-11-18 15:03:49.911206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.607 [2024-11-18 15:03:49.968961] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:26.607 [2024-11-18 15:03:49.969109] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.607 [2024-11-18 15:03:49.969121] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.607 [2024-11-18 15:03:49.969130] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:26.607 [2024-11-18 15:03:49.969157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.866 [2024-11-18 15:03:50.212434] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.866 [2024-11-18 15:03:50.244387] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:26.866 [2024-11-18 15:03:50.244624] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.125 15:03:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:27.125 15:03:50 -- common/autotest_common.sh@862 -- # return 0 00:17:27.125 15:03:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:27.125 15:03:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:27.125 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:17:27.383 15:03:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.383 15:03:50 -- target/tls.sh@216 -- # bdevperf_pid=89777 00:17:27.383 15:03:50 -- target/tls.sh@217 -- # waitforlisten 89777 /var/tmp/bdevperf.sock 00:17:27.383 15:03:50 -- common/autotest_common.sh@829 -- # '[' -z 89777 ']' 00:17:27.383 15:03:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:27.383 15:03:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:27.384 15:03:50 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:27.384 15:03:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:27.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:27.384 15:03:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:27.384 15:03:50 -- target/tls.sh@213 -- # echo '{ 00:17:27.384 "subsystems": [ 00:17:27.384 { 00:17:27.384 "subsystem": "iobuf", 00:17:27.384 "config": [ 00:17:27.384 { 00:17:27.384 "method": "iobuf_set_options", 00:17:27.384 "params": { 00:17:27.384 "large_bufsize": 135168, 00:17:27.384 "large_pool_count": 1024, 00:17:27.384 "small_bufsize": 8192, 00:17:27.384 "small_pool_count": 8192 00:17:27.384 } 00:17:27.384 } 00:17:27.384 ] 00:17:27.384 }, 00:17:27.384 { 00:17:27.384 "subsystem": "sock", 00:17:27.384 "config": [ 00:17:27.384 { 00:17:27.384 "method": "sock_impl_set_options", 00:17:27.384 "params": { 00:17:27.384 "enable_ktls": false, 00:17:27.384 "enable_placement_id": 0, 00:17:27.384 "enable_quickack": false, 00:17:27.384 "enable_recv_pipe": true, 00:17:27.384 "enable_zerocopy_send_client": false, 00:17:27.384 "enable_zerocopy_send_server": true, 00:17:27.384 "impl_name": "posix", 00:17:27.384 "recv_buf_size": 2097152, 00:17:27.384 "send_buf_size": 2097152, 00:17:27.384 "tls_version": 0, 00:17:27.384 "zerocopy_threshold": 0 00:17:27.384 } 00:17:27.384 }, 00:17:27.384 { 00:17:27.384 "method": "sock_impl_set_options", 00:17:27.384 "params": { 00:17:27.384 "enable_ktls": false, 00:17:27.384 "enable_placement_id": 0, 00:17:27.384 "enable_quickack": false, 00:17:27.384 "enable_recv_pipe": true, 00:17:27.384 "enable_zerocopy_send_client": false, 00:17:27.384 "enable_zerocopy_send_server": true, 00:17:27.384 "impl_name": "ssl", 00:17:27.384 "recv_buf_size": 4096, 00:17:27.384 "send_buf_size": 4096, 00:17:27.384 "tls_version": 0, 00:17:27.384 "zerocopy_threshold": 0 00:17:27.384 } 00:17:27.384 } 00:17:27.384 ] 00:17:27.384 }, 00:17:27.384 { 00:17:27.384 "subsystem": "vmd", 00:17:27.384 "config": [] 00:17:27.384 }, 00:17:27.384 { 00:17:27.384 "subsystem": "accel", 00:17:27.384 "config": [ 00:17:27.384 { 00:17:27.384 "method": "accel_set_options", 00:17:27.384 "params": { 00:17:27.384 "buf_count": 2048, 00:17:27.384 "large_cache_size": 16, 00:17:27.384 "sequence_count": 2048, 00:17:27.384 "small_cache_size": 128, 00:17:27.384 "task_count": 2048 00:17:27.384 } 00:17:27.384 } 00:17:27.384 ] 00:17:27.384 }, 00:17:27.384 { 00:17:27.384 "subsystem": "bdev", 00:17:27.384 "config": [ 00:17:27.384 { 00:17:27.384 "method": "bdev_set_options", 00:17:27.384 "params": { 00:17:27.384 "bdev_auto_examine": true, 00:17:27.384 "bdev_io_cache_size": 256, 00:17:27.384 "bdev_io_pool_size": 65535, 00:17:27.384 "iobuf_large_cache_size": 16, 00:17:27.384 "iobuf_small_cache_size": 128 00:17:27.384 } 00:17:27.384 }, 00:17:27.384 { 00:17:27.384 "method": "bdev_raid_set_options", 00:17:27.384 "params": { 00:17:27.384 "process_window_size_kb": 1024 00:17:27.384 } 00:17:27.384 }, 00:17:27.384 { 00:17:27.384 "method": "bdev_iscsi_set_options", 00:17:27.384 "params": { 00:17:27.384 "timeout_sec": 30 00:17:27.384 } 00:17:27.384 }, 00:17:27.384 { 00:17:27.384 "method": "bdev_nvme_set_options", 00:17:27.384 "params": { 00:17:27.384 "action_on_timeout": "none", 00:17:27.384 "allow_accel_sequence": false, 00:17:27.384 "arbitration_burst": 0, 00:17:27.384 "bdev_retry_count": 3, 00:17:27.384 "ctrlr_loss_timeout_sec": 0, 00:17:27.384 "delay_cmd_submit": true, 00:17:27.384 "fast_io_fail_timeout_sec": 0, 00:17:27.384 "generate_uuids": false, 00:17:27.384 "high_priority_weight": 0, 00:17:27.384 "io_path_stat": false, 00:17:27.384 "io_queue_requests": 512, 00:17:27.384 "keep_alive_timeout_ms": 10000, 00:17:27.384 "low_priority_weight": 0, 00:17:27.384 
"medium_priority_weight": 0, 00:17:27.384 "nvme_adminq_poll_period_us": 10000, 00:17:27.384 "nvme_ioq_poll_period_us": 0, 00:17:27.384 "reconnect_delay_sec": 0, 00:17:27.384 "timeout_admin_us": 0, 00:17:27.384 "timeout_us": 0, 00:17:27.384 "transport_ack_timeout": 0, 00:17:27.384 "transport_retry_count": 4, 00:17:27.384 "transport_tos": 0 00:17:27.384 } 00:17:27.384 }, 00:17:27.384 { 00:17:27.384 "method": "bdev_nvme_attach_controller", 00:17:27.384 "params": { 00:17:27.384 "adrfam": "IPv4", 00:17:27.384 "ctrlr_loss_timeout_sec": 0, 00:17:27.384 "ddgst": false, 00:17:27.384 "fast_io_fail_timeout_sec": 0, 00:17:27.384 "hdgst": false, 00:17:27.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:27.384 "name": "TLSTEST", 00:17:27.384 "prchk_guard": false, 00:17:27.384 "prchk_reftag": false, 00:17:27.384 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:27.384 "reconnect_delay_sec": 0, 00:17:27.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:27.384 "traddr": "10.0.0.2", 00:17:27.384 "trsvcid": "4420", 00:17:27.384 "trtype": "TCP" 00:17:27.384 } 00:17:27.384 }, 00:17:27.384 { 00:17:27.384 "method": "bdev_nvme_set_hotplug", 00:17:27.384 "params": { 00:17:27.384 "enable": false, 00:17:27.384 "period_us": 100000 00:17:27.384 } 00:17:27.384 }, 00:17:27.384 { 00:17:27.384 "method": "bdev_wait_for_examine" 00:17:27.384 } 00:17:27.384 ] 00:17:27.384 }, 00:17:27.384 { 00:17:27.384 "subsystem": "nbd", 00:17:27.384 "config": [] 00:17:27.384 } 00:17:27.384 ] 00:17:27.384 }' 00:17:27.384 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:17:27.384 [2024-11-18 15:03:50.767153] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:27.384 [2024-11-18 15:03:50.767272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89777 ] 00:17:27.384 [2024-11-18 15:03:50.907866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.643 [2024-11-18 15:03:50.972007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.643 [2024-11-18 15:03:51.121268] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:28.212 15:03:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:28.212 15:03:51 -- common/autotest_common.sh@862 -- # return 0 00:17:28.212 15:03:51 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:28.212 Running I/O for 10 seconds... 
00:17:38.189 00:17:38.189 Latency(us) 00:17:38.189 [2024-11-18T15:04:01.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.189 [2024-11-18T15:04:01.779Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:38.189 Verification LBA range: start 0x0 length 0x2000 00:17:38.189 TLSTESTn1 : 10.01 6626.25 25.88 0.00 0.00 19288.25 5064.15 23831.27 00:17:38.189 [2024-11-18T15:04:01.779Z] =================================================================================================================== 00:17:38.189 [2024-11-18T15:04:01.779Z] Total : 6626.25 25.88 0.00 0.00 19288.25 5064.15 23831.27 00:17:38.189 0 00:17:38.447 15:04:01 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:38.447 15:04:01 -- target/tls.sh@223 -- # killprocess 89777 00:17:38.447 15:04:01 -- common/autotest_common.sh@936 -- # '[' -z 89777 ']' 00:17:38.447 15:04:01 -- common/autotest_common.sh@940 -- # kill -0 89777 00:17:38.447 15:04:01 -- common/autotest_common.sh@941 -- # uname 00:17:38.447 15:04:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:38.447 15:04:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89777 00:17:38.447 killing process with pid 89777 00:17:38.447 Received shutdown signal, test time was about 10.000000 seconds 00:17:38.447 00:17:38.447 Latency(us) 00:17:38.447 [2024-11-18T15:04:02.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.447 [2024-11-18T15:04:02.037Z] =================================================================================================================== 00:17:38.447 [2024-11-18T15:04:02.037Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:38.447 15:04:01 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:38.448 15:04:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:38.448 15:04:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89777' 00:17:38.448 15:04:01 -- common/autotest_common.sh@955 -- # kill 89777 00:17:38.448 15:04:01 -- common/autotest_common.sh@960 -- # wait 89777 00:17:38.448 15:04:02 -- target/tls.sh@224 -- # killprocess 89733 00:17:38.448 15:04:02 -- common/autotest_common.sh@936 -- # '[' -z 89733 ']' 00:17:38.448 15:04:02 -- common/autotest_common.sh@940 -- # kill -0 89733 00:17:38.448 15:04:02 -- common/autotest_common.sh@941 -- # uname 00:17:38.448 15:04:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:38.448 15:04:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89733 00:17:38.706 killing process with pid 89733 00:17:38.706 15:04:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:38.706 15:04:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:38.706 15:04:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89733' 00:17:38.706 15:04:02 -- common/autotest_common.sh@955 -- # kill 89733 00:17:38.706 15:04:02 -- common/autotest_common.sh@960 -- # wait 89733 00:17:38.965 15:04:02 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:38.965 15:04:02 -- target/tls.sh@227 -- # cleanup 00:17:38.965 15:04:02 -- target/tls.sh@15 -- # process_shm --id 0 00:17:38.965 15:04:02 -- common/autotest_common.sh@806 -- # type=--id 00:17:38.965 15:04:02 -- common/autotest_common.sh@807 -- # id=0 00:17:38.965 15:04:02 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:38.965 15:04:02 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' 
-printf '%f\n' 00:17:38.965 15:04:02 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:38.965 15:04:02 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:38.965 15:04:02 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:38.965 15:04:02 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:38.965 nvmf_trace.0 00:17:38.965 15:04:02 -- common/autotest_common.sh@821 -- # return 0 00:17:38.965 15:04:02 -- target/tls.sh@16 -- # killprocess 89777 00:17:38.965 Process with pid 89777 is not found 00:17:38.965 15:04:02 -- common/autotest_common.sh@936 -- # '[' -z 89777 ']' 00:17:38.965 15:04:02 -- common/autotest_common.sh@940 -- # kill -0 89777 00:17:38.965 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89777) - No such process 00:17:38.965 15:04:02 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89777 is not found' 00:17:38.965 15:04:02 -- target/tls.sh@17 -- # nvmftestfini 00:17:38.965 15:04:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:38.965 15:04:02 -- nvmf/common.sh@116 -- # sync 00:17:38.965 15:04:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:38.965 15:04:02 -- nvmf/common.sh@119 -- # set +e 00:17:38.965 15:04:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:38.965 15:04:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:38.965 rmmod nvme_tcp 00:17:38.965 rmmod nvme_fabrics 00:17:38.965 rmmod nvme_keyring 00:17:38.965 15:04:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:38.965 15:04:02 -- nvmf/common.sh@123 -- # set -e 00:17:38.965 15:04:02 -- nvmf/common.sh@124 -- # return 0 00:17:38.965 15:04:02 -- nvmf/common.sh@477 -- # '[' -n 89733 ']' 00:17:38.965 Process with pid 89733 is not found 00:17:38.965 15:04:02 -- nvmf/common.sh@478 -- # killprocess 89733 00:17:38.965 15:04:02 -- common/autotest_common.sh@936 -- # '[' -z 89733 ']' 00:17:38.965 15:04:02 -- common/autotest_common.sh@940 -- # kill -0 89733 00:17:38.965 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89733) - No such process 00:17:38.965 15:04:02 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89733 is not found' 00:17:38.965 15:04:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:38.965 15:04:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:38.965 15:04:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:38.965 15:04:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:38.965 15:04:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:38.965 15:04:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.965 15:04:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.965 15:04:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.965 15:04:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:38.965 15:04:02 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:38.965 00:17:38.965 real 1m10.829s 00:17:38.965 user 1m45.155s 00:17:38.965 sys 0m27.000s 00:17:38.965 15:04:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:38.965 15:04:02 -- common/autotest_common.sh@10 -- # set +x 00:17:38.965 ************************************ 00:17:38.965 END TEST nvmf_tls 00:17:38.965 
************************************ 00:17:39.224 15:04:02 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:39.224 15:04:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:39.224 15:04:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:39.224 15:04:02 -- common/autotest_common.sh@10 -- # set +x 00:17:39.224 ************************************ 00:17:39.224 START TEST nvmf_fips 00:17:39.224 ************************************ 00:17:39.224 15:04:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:39.224 * Looking for test storage... 00:17:39.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:39.224 15:04:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:39.224 15:04:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:39.224 15:04:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:39.224 15:04:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:39.224 15:04:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:39.224 15:04:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:39.224 15:04:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:39.225 15:04:02 -- scripts/common.sh@335 -- # IFS=.-: 00:17:39.225 15:04:02 -- scripts/common.sh@335 -- # read -ra ver1 00:17:39.225 15:04:02 -- scripts/common.sh@336 -- # IFS=.-: 00:17:39.225 15:04:02 -- scripts/common.sh@336 -- # read -ra ver2 00:17:39.225 15:04:02 -- scripts/common.sh@337 -- # local 'op=<' 00:17:39.225 15:04:02 -- scripts/common.sh@339 -- # ver1_l=2 00:17:39.225 15:04:02 -- scripts/common.sh@340 -- # ver2_l=1 00:17:39.225 15:04:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:39.225 15:04:02 -- scripts/common.sh@343 -- # case "$op" in 00:17:39.225 15:04:02 -- scripts/common.sh@344 -- # : 1 00:17:39.225 15:04:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:39.225 15:04:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:39.225 15:04:02 -- scripts/common.sh@364 -- # decimal 1 00:17:39.225 15:04:02 -- scripts/common.sh@352 -- # local d=1 00:17:39.225 15:04:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:39.225 15:04:02 -- scripts/common.sh@354 -- # echo 1 00:17:39.225 15:04:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:39.225 15:04:02 -- scripts/common.sh@365 -- # decimal 2 00:17:39.225 15:04:02 -- scripts/common.sh@352 -- # local d=2 00:17:39.225 15:04:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:39.225 15:04:02 -- scripts/common.sh@354 -- # echo 2 00:17:39.225 15:04:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:39.225 15:04:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:39.225 15:04:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:39.225 15:04:02 -- scripts/common.sh@367 -- # return 0 00:17:39.225 15:04:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:39.225 15:04:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:39.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.225 --rc genhtml_branch_coverage=1 00:17:39.225 --rc genhtml_function_coverage=1 00:17:39.225 --rc genhtml_legend=1 00:17:39.225 --rc geninfo_all_blocks=1 00:17:39.225 --rc geninfo_unexecuted_blocks=1 00:17:39.225 00:17:39.225 ' 00:17:39.225 15:04:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:39.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.225 --rc genhtml_branch_coverage=1 00:17:39.225 --rc genhtml_function_coverage=1 00:17:39.225 --rc genhtml_legend=1 00:17:39.225 --rc geninfo_all_blocks=1 00:17:39.225 --rc geninfo_unexecuted_blocks=1 00:17:39.225 00:17:39.225 ' 00:17:39.225 15:04:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:39.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.225 --rc genhtml_branch_coverage=1 00:17:39.225 --rc genhtml_function_coverage=1 00:17:39.225 --rc genhtml_legend=1 00:17:39.225 --rc geninfo_all_blocks=1 00:17:39.225 --rc geninfo_unexecuted_blocks=1 00:17:39.225 00:17:39.225 ' 00:17:39.225 15:04:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:39.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.225 --rc genhtml_branch_coverage=1 00:17:39.225 --rc genhtml_function_coverage=1 00:17:39.225 --rc genhtml_legend=1 00:17:39.225 --rc geninfo_all_blocks=1 00:17:39.225 --rc geninfo_unexecuted_blocks=1 00:17:39.225 00:17:39.225 ' 00:17:39.225 15:04:02 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:39.225 15:04:02 -- nvmf/common.sh@7 -- # uname -s 00:17:39.225 15:04:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.225 15:04:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.225 15:04:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.225 15:04:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.225 15:04:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.225 15:04:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.225 15:04:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.225 15:04:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.225 15:04:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.225 15:04:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.225 15:04:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:17:39.225 
15:04:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:17:39.225 15:04:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.225 15:04:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.225 15:04:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:39.225 15:04:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:39.225 15:04:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.225 15:04:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.225 15:04:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.225 15:04:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.225 15:04:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.225 15:04:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.225 15:04:02 -- paths/export.sh@5 -- # export PATH 00:17:39.225 15:04:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.225 15:04:02 -- nvmf/common.sh@46 -- # : 0 00:17:39.225 15:04:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:39.225 15:04:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:39.225 15:04:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:39.225 15:04:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.225 15:04:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.225 15:04:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:17:39.225 15:04:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:39.225 15:04:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:39.225 15:04:02 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:39.225 15:04:02 -- fips/fips.sh@89 -- # check_openssl_version 00:17:39.225 15:04:02 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:39.225 15:04:02 -- fips/fips.sh@85 -- # openssl version 00:17:39.225 15:04:02 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:39.225 15:04:02 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:17:39.225 15:04:02 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:39.225 15:04:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:39.225 15:04:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:39.225 15:04:02 -- scripts/common.sh@335 -- # IFS=.-: 00:17:39.225 15:04:02 -- scripts/common.sh@335 -- # read -ra ver1 00:17:39.225 15:04:02 -- scripts/common.sh@336 -- # IFS=.-: 00:17:39.225 15:04:02 -- scripts/common.sh@336 -- # read -ra ver2 00:17:39.225 15:04:02 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:39.225 15:04:02 -- scripts/common.sh@339 -- # ver1_l=3 00:17:39.225 15:04:02 -- scripts/common.sh@340 -- # ver2_l=3 00:17:39.225 15:04:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:39.225 15:04:02 -- scripts/common.sh@343 -- # case "$op" in 00:17:39.225 15:04:02 -- scripts/common.sh@347 -- # : 1 00:17:39.225 15:04:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:39.225 15:04:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:39.225 15:04:02 -- scripts/common.sh@364 -- # decimal 3 00:17:39.484 15:04:02 -- scripts/common.sh@352 -- # local d=3 00:17:39.484 15:04:02 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:39.484 15:04:02 -- scripts/common.sh@354 -- # echo 3 00:17:39.484 15:04:02 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:39.484 15:04:02 -- scripts/common.sh@365 -- # decimal 3 00:17:39.484 15:04:02 -- scripts/common.sh@352 -- # local d=3 00:17:39.484 15:04:02 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:39.484 15:04:02 -- scripts/common.sh@354 -- # echo 3 00:17:39.484 15:04:02 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:39.484 15:04:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:39.484 15:04:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:39.484 15:04:02 -- scripts/common.sh@363 -- # (( v++ )) 00:17:39.484 15:04:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:39.484 15:04:02 -- scripts/common.sh@364 -- # decimal 1 00:17:39.484 15:04:02 -- scripts/common.sh@352 -- # local d=1 00:17:39.484 15:04:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:39.484 15:04:02 -- scripts/common.sh@354 -- # echo 1 00:17:39.484 15:04:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:39.484 15:04:02 -- scripts/common.sh@365 -- # decimal 0 00:17:39.484 15:04:02 -- scripts/common.sh@352 -- # local d=0 00:17:39.484 15:04:02 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:39.484 15:04:02 -- scripts/common.sh@354 -- # echo 0 00:17:39.484 15:04:02 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:39.484 15:04:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:39.484 15:04:02 -- scripts/common.sh@366 -- # return 0 00:17:39.484 15:04:02 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:39.484 15:04:02 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:17:39.484 15:04:02 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:39.484 15:04:02 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:39.485 15:04:02 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:39.485 15:04:02 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:39.485 15:04:02 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:39.485 15:04:02 -- fips/fips.sh@113 -- # build_openssl_config 00:17:39.485 15:04:02 -- fips/fips.sh@37 -- # cat 00:17:39.485 15:04:02 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:17:39.485 15:04:02 -- fips/fips.sh@58 -- # cat - 00:17:39.485 15:04:02 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:39.485 15:04:02 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:39.485 15:04:02 -- fips/fips.sh@116 -- # mapfile -t providers 00:17:39.485 15:04:02 -- fips/fips.sh@116 -- # openssl list -providers 00:17:39.485 15:04:02 -- fips/fips.sh@116 -- # grep name 00:17:39.485 15:04:02 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:39.485 15:04:02 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:39.485 15:04:02 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:39.485 15:04:02 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:39.485 15:04:02 -- fips/fips.sh@127 -- # : 00:17:39.485 15:04:02 -- common/autotest_common.sh@650 -- # local es=0 00:17:39.485 15:04:02 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:39.485 15:04:02 -- common/autotest_common.sh@638 -- # local arg=openssl 00:17:39.485 15:04:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.485 15:04:02 -- common/autotest_common.sh@642 -- # type -t openssl 00:17:39.485 15:04:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.485 15:04:02 -- common/autotest_common.sh@644 -- # type -P openssl 00:17:39.485 15:04:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.485 15:04:02 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:17:39.485 15:04:02 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:17:39.485 15:04:02 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:17:39.485 Error setting digest 00:17:39.485 40C29AC9837F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:39.485 40C29AC9837F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:39.485 15:04:02 -- common/autotest_common.sh@653 -- # es=1 00:17:39.485 15:04:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:39.485 15:04:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:39.485 15:04:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:39.485 15:04:02 -- fips/fips.sh@130 -- # nvmftestinit 00:17:39.485 15:04:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:39.485 15:04:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.485 15:04:02 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:17:39.485 15:04:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:39.485 15:04:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:39.485 15:04:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.485 15:04:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:39.485 15:04:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.485 15:04:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:39.485 15:04:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:39.485 15:04:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:39.485 15:04:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:39.485 15:04:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:39.485 15:04:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:39.485 15:04:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.485 15:04:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.485 15:04:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:39.485 15:04:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:39.485 15:04:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:39.485 15:04:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:39.485 15:04:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:39.485 15:04:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.485 15:04:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:39.485 15:04:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:39.485 15:04:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:39.485 15:04:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:39.485 15:04:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:39.485 15:04:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:39.485 Cannot find device "nvmf_tgt_br" 00:17:39.485 15:04:02 -- nvmf/common.sh@154 -- # true 00:17:39.485 15:04:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:39.485 Cannot find device "nvmf_tgt_br2" 00:17:39.485 15:04:03 -- nvmf/common.sh@155 -- # true 00:17:39.485 15:04:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:39.485 15:04:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:39.485 Cannot find device "nvmf_tgt_br" 00:17:39.485 15:04:03 -- nvmf/common.sh@157 -- # true 00:17:39.485 15:04:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:39.485 Cannot find device "nvmf_tgt_br2" 00:17:39.485 15:04:03 -- nvmf/common.sh@158 -- # true 00:17:39.485 15:04:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:39.485 15:04:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:39.744 15:04:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:39.744 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:39.744 15:04:03 -- nvmf/common.sh@161 -- # true 00:17:39.744 15:04:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:39.744 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:39.744 15:04:03 -- nvmf/common.sh@162 -- # true 00:17:39.744 15:04:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:39.744 15:04:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:39.744 15:04:03 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:39.744 15:04:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:39.744 15:04:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:39.744 15:04:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:39.744 15:04:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:39.744 15:04:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:39.744 15:04:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:39.744 15:04:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:39.744 15:04:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:39.744 15:04:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:39.744 15:04:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:39.744 15:04:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:39.744 15:04:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:39.744 15:04:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:39.744 15:04:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:39.744 15:04:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:39.744 15:04:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:39.744 15:04:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:39.744 15:04:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:39.744 15:04:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:39.744 15:04:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:39.744 15:04:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:39.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:39.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:17:39.744 00:17:39.744 --- 10.0.0.2 ping statistics --- 00:17:39.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.744 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:17:39.744 15:04:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:39.744 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:39.744 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:17:39.744 00:17:39.744 --- 10.0.0.3 ping statistics --- 00:17:39.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.744 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:39.744 15:04:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:39.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:39.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:39.744 00:17:39.744 --- 10.0.0.1 ping statistics --- 00:17:39.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.744 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:39.744 15:04:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.744 15:04:03 -- nvmf/common.sh@421 -- # return 0 00:17:39.744 15:04:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:39.744 15:04:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.744 15:04:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:39.744 15:04:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:39.744 15:04:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.744 15:04:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:39.744 15:04:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:39.744 15:04:03 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:39.744 15:04:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:39.744 15:04:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:39.744 15:04:03 -- common/autotest_common.sh@10 -- # set +x 00:17:39.744 15:04:03 -- nvmf/common.sh@469 -- # nvmfpid=90144 00:17:39.744 15:04:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:39.744 15:04:03 -- nvmf/common.sh@470 -- # waitforlisten 90144 00:17:39.744 15:04:03 -- common/autotest_common.sh@829 -- # '[' -z 90144 ']' 00:17:39.744 15:04:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.744 15:04:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.744 15:04:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.744 15:04:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.744 15:04:03 -- common/autotest_common.sh@10 -- # set +x 00:17:40.003 [2024-11-18 15:04:03.394964] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:40.003 [2024-11-18 15:04:03.395056] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.003 [2024-11-18 15:04:03.537453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.262 [2024-11-18 15:04:03.616787] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:40.262 [2024-11-18 15:04:03.616968] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.262 [2024-11-18 15:04:03.616987] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.262 [2024-11-18 15:04:03.616999] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:40.262 [2024-11-18 15:04:03.617029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.829 15:04:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.829 15:04:04 -- common/autotest_common.sh@862 -- # return 0 00:17:40.829 15:04:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:40.829 15:04:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:40.829 15:04:04 -- common/autotest_common.sh@10 -- # set +x 00:17:41.087 15:04:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.087 15:04:04 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:17:41.087 15:04:04 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:41.087 15:04:04 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:41.087 15:04:04 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:41.087 15:04:04 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:41.087 15:04:04 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:41.087 15:04:04 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:41.087 15:04:04 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:41.346 [2024-11-18 15:04:04.705828] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.346 [2024-11-18 15:04:04.721805] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:41.346 [2024-11-18 15:04:04.721999] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.346 malloc0 00:17:41.346 15:04:04 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:41.346 15:04:04 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:41.346 15:04:04 -- fips/fips.sh@147 -- # bdevperf_pid=90202 00:17:41.346 15:04:04 -- fips/fips.sh@148 -- # waitforlisten 90202 /var/tmp/bdevperf.sock 00:17:41.346 15:04:04 -- common/autotest_common.sh@829 -- # '[' -z 90202 ']' 00:17:41.346 15:04:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:41.346 15:04:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:41.346 15:04:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:41.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:41.346 15:04:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:41.346 15:04:04 -- common/autotest_common.sh@10 -- # set +x 00:17:41.346 [2024-11-18 15:04:04.833994] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:41.346 [2024-11-18 15:04:04.834061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90202 ] 00:17:41.605 [2024-11-18 15:04:04.968464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.605 [2024-11-18 15:04:05.036264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.541 15:04:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:42.541 15:04:05 -- common/autotest_common.sh@862 -- # return 0 00:17:42.542 15:04:05 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:42.542 [2024-11-18 15:04:06.066163] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:42.801 TLSTESTn1 00:17:42.801 15:04:06 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:42.801 Running I/O for 10 seconds... 00:17:52.774 00:17:52.774 Latency(us) 00:17:52.774 [2024-11-18T15:04:16.364Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.774 [2024-11-18T15:04:16.364Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:52.774 Verification LBA range: start 0x0 length 0x2000 00:17:52.774 TLSTESTn1 : 10.02 5955.43 23.26 0.00 0.00 21458.18 6345.08 23712.12 00:17:52.774 [2024-11-18T15:04:16.364Z] =================================================================================================================== 00:17:52.774 [2024-11-18T15:04:16.364Z] Total : 5955.43 23.26 0.00 0.00 21458.18 6345.08 23712.12 00:17:52.774 0 00:17:52.774 15:04:16 -- fips/fips.sh@1 -- # cleanup 00:17:52.774 15:04:16 -- fips/fips.sh@15 -- # process_shm --id 0 00:17:52.774 15:04:16 -- common/autotest_common.sh@806 -- # type=--id 00:17:52.774 15:04:16 -- common/autotest_common.sh@807 -- # id=0 00:17:52.774 15:04:16 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:52.774 15:04:16 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:52.774 15:04:16 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:52.774 15:04:16 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:52.774 15:04:16 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:52.774 15:04:16 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:52.774 nvmf_trace.0 00:17:53.033 15:04:16 -- common/autotest_common.sh@821 -- # return 0 00:17:53.033 15:04:16 -- fips/fips.sh@16 -- # killprocess 90202 00:17:53.033 15:04:16 -- common/autotest_common.sh@936 -- # '[' -z 90202 ']' 00:17:53.033 15:04:16 -- common/autotest_common.sh@940 -- # kill -0 90202 00:17:53.033 15:04:16 -- common/autotest_common.sh@941 -- # uname 00:17:53.033 15:04:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:53.033 15:04:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90202 00:17:53.033 15:04:16 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:53.033 killing process with pid 90202 00:17:53.033 15:04:16 -- 
common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:53.033 15:04:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90202' 00:17:53.033 Received shutdown signal, test time was about 10.000000 seconds 00:17:53.033 00:17:53.033 Latency(us) 00:17:53.033 [2024-11-18T15:04:16.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.033 [2024-11-18T15:04:16.623Z] =================================================================================================================== 00:17:53.033 [2024-11-18T15:04:16.623Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:53.033 15:04:16 -- common/autotest_common.sh@955 -- # kill 90202 00:17:53.033 15:04:16 -- common/autotest_common.sh@960 -- # wait 90202 00:17:53.294 15:04:16 -- fips/fips.sh@17 -- # nvmftestfini 00:17:53.294 15:04:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:53.294 15:04:16 -- nvmf/common.sh@116 -- # sync 00:17:53.294 15:04:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:53.294 15:04:16 -- nvmf/common.sh@119 -- # set +e 00:17:53.294 15:04:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:53.294 15:04:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:53.294 rmmod nvme_tcp 00:17:53.294 rmmod nvme_fabrics 00:17:53.294 rmmod nvme_keyring 00:17:53.294 15:04:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:53.294 15:04:16 -- nvmf/common.sh@123 -- # set -e 00:17:53.294 15:04:16 -- nvmf/common.sh@124 -- # return 0 00:17:53.294 15:04:16 -- nvmf/common.sh@477 -- # '[' -n 90144 ']' 00:17:53.294 15:04:16 -- nvmf/common.sh@478 -- # killprocess 90144 00:17:53.294 15:04:16 -- common/autotest_common.sh@936 -- # '[' -z 90144 ']' 00:17:53.294 15:04:16 -- common/autotest_common.sh@940 -- # kill -0 90144 00:17:53.294 15:04:16 -- common/autotest_common.sh@941 -- # uname 00:17:53.294 15:04:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:53.294 15:04:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90144 00:17:53.294 15:04:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:53.294 15:04:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:53.294 killing process with pid 90144 00:17:53.294 15:04:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90144' 00:17:53.294 15:04:16 -- common/autotest_common.sh@955 -- # kill 90144 00:17:53.294 15:04:16 -- common/autotest_common.sh@960 -- # wait 90144 00:17:53.556 15:04:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:53.556 15:04:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:53.556 15:04:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:53.556 15:04:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:53.556 15:04:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:53.556 15:04:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.556 15:04:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.556 15:04:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.556 15:04:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:53.556 15:04:17 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:53.556 00:17:53.556 real 0m14.492s 00:17:53.556 user 0m18.634s 00:17:53.556 sys 0m6.516s 00:17:53.556 15:04:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:53.556 15:04:17 -- common/autotest_common.sh@10 -- # set +x 00:17:53.556 ************************************ 
00:17:53.556 END TEST nvmf_fips 00:17:53.556 ************************************ 00:17:53.556 15:04:17 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:17:53.556 15:04:17 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:53.556 15:04:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:53.556 15:04:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:53.556 15:04:17 -- common/autotest_common.sh@10 -- # set +x 00:17:53.556 ************************************ 00:17:53.556 START TEST nvmf_fuzz 00:17:53.556 ************************************ 00:17:53.556 15:04:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:53.815 * Looking for test storage... 00:17:53.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:53.815 15:04:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:53.815 15:04:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:53.815 15:04:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:53.815 15:04:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:53.815 15:04:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:53.815 15:04:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:53.815 15:04:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:53.815 15:04:17 -- scripts/common.sh@335 -- # IFS=.-: 00:17:53.815 15:04:17 -- scripts/common.sh@335 -- # read -ra ver1 00:17:53.815 15:04:17 -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.815 15:04:17 -- scripts/common.sh@336 -- # read -ra ver2 00:17:53.815 15:04:17 -- scripts/common.sh@337 -- # local 'op=<' 00:17:53.815 15:04:17 -- scripts/common.sh@339 -- # ver1_l=2 00:17:53.815 15:04:17 -- scripts/common.sh@340 -- # ver2_l=1 00:17:53.815 15:04:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:53.815 15:04:17 -- scripts/common.sh@343 -- # case "$op" in 00:17:53.815 15:04:17 -- scripts/common.sh@344 -- # : 1 00:17:53.815 15:04:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:53.815 15:04:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:53.815 15:04:17 -- scripts/common.sh@364 -- # decimal 1 00:17:53.815 15:04:17 -- scripts/common.sh@352 -- # local d=1 00:17:53.815 15:04:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.815 15:04:17 -- scripts/common.sh@354 -- # echo 1 00:17:53.815 15:04:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:53.815 15:04:17 -- scripts/common.sh@365 -- # decimal 2 00:17:53.815 15:04:17 -- scripts/common.sh@352 -- # local d=2 00:17:53.815 15:04:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.815 15:04:17 -- scripts/common.sh@354 -- # echo 2 00:17:53.815 15:04:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:53.815 15:04:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:53.815 15:04:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:53.815 15:04:17 -- scripts/common.sh@367 -- # return 0 00:17:53.815 15:04:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.815 15:04:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:53.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.815 --rc genhtml_branch_coverage=1 00:17:53.815 --rc genhtml_function_coverage=1 00:17:53.815 --rc genhtml_legend=1 00:17:53.815 --rc geninfo_all_blocks=1 00:17:53.815 --rc geninfo_unexecuted_blocks=1 00:17:53.815 00:17:53.815 ' 00:17:53.815 15:04:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:53.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.815 --rc genhtml_branch_coverage=1 00:17:53.815 --rc genhtml_function_coverage=1 00:17:53.815 --rc genhtml_legend=1 00:17:53.815 --rc geninfo_all_blocks=1 00:17:53.815 --rc geninfo_unexecuted_blocks=1 00:17:53.815 00:17:53.815 ' 00:17:53.815 15:04:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:53.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.815 --rc genhtml_branch_coverage=1 00:17:53.815 --rc genhtml_function_coverage=1 00:17:53.815 --rc genhtml_legend=1 00:17:53.815 --rc geninfo_all_blocks=1 00:17:53.815 --rc geninfo_unexecuted_blocks=1 00:17:53.815 00:17:53.815 ' 00:17:53.815 15:04:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:53.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.815 --rc genhtml_branch_coverage=1 00:17:53.815 --rc genhtml_function_coverage=1 00:17:53.815 --rc genhtml_legend=1 00:17:53.815 --rc geninfo_all_blocks=1 00:17:53.815 --rc geninfo_unexecuted_blocks=1 00:17:53.815 00:17:53.815 ' 00:17:53.815 15:04:17 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:53.815 15:04:17 -- nvmf/common.sh@7 -- # uname -s 00:17:53.815 15:04:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.815 15:04:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.815 15:04:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.815 15:04:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.815 15:04:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.815 15:04:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.815 15:04:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.815 15:04:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.815 15:04:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.815 15:04:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.815 15:04:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 
00:17:53.815 15:04:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:17:53.815 15:04:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.815 15:04:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.815 15:04:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:53.816 15:04:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:53.816 15:04:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.816 15:04:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.816 15:04:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.816 15:04:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.816 15:04:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.816 15:04:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.816 15:04:17 -- paths/export.sh@5 -- # export PATH 00:17:53.816 15:04:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.816 15:04:17 -- nvmf/common.sh@46 -- # : 0 00:17:53.816 15:04:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:53.816 15:04:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:53.816 15:04:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:53.816 15:04:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.816 15:04:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.816 15:04:17 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:17:53.816 15:04:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:53.816 15:04:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:53.816 15:04:17 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:17:53.816 15:04:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:53.816 15:04:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.816 15:04:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:53.816 15:04:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:53.816 15:04:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:53.816 15:04:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.816 15:04:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.816 15:04:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.816 15:04:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:53.816 15:04:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:53.816 15:04:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:53.816 15:04:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:53.816 15:04:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:53.816 15:04:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:53.816 15:04:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:53.816 15:04:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:53.816 15:04:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:53.816 15:04:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:53.816 15:04:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:53.816 15:04:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:53.816 15:04:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:53.816 15:04:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:53.816 15:04:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:53.816 15:04:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:53.816 15:04:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:53.816 15:04:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:53.816 15:04:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:53.816 15:04:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:53.816 Cannot find device "nvmf_tgt_br" 00:17:53.816 15:04:17 -- nvmf/common.sh@154 -- # true 00:17:53.816 15:04:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:53.816 Cannot find device "nvmf_tgt_br2" 00:17:53.816 15:04:17 -- nvmf/common.sh@155 -- # true 00:17:53.816 15:04:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:53.816 15:04:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:53.816 Cannot find device "nvmf_tgt_br" 00:17:53.816 15:04:17 -- nvmf/common.sh@157 -- # true 00:17:53.816 15:04:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:53.816 Cannot find device "nvmf_tgt_br2" 00:17:53.816 15:04:17 -- nvmf/common.sh@158 -- # true 00:17:53.816 15:04:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:54.075 15:04:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:54.075 15:04:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:54.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:54.075 15:04:17 -- nvmf/common.sh@161 -- # true 00:17:54.075 15:04:17 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:54.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:54.075 15:04:17 -- nvmf/common.sh@162 -- # true 00:17:54.075 15:04:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:54.075 15:04:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:54.075 15:04:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:54.075 15:04:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:54.075 15:04:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:54.075 15:04:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:54.075 15:04:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:54.075 15:04:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:54.075 15:04:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:54.075 15:04:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:54.075 15:04:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:54.075 15:04:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:54.075 15:04:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:54.075 15:04:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:54.075 15:04:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:54.075 15:04:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:54.075 15:04:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:54.075 15:04:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:54.075 15:04:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:54.075 15:04:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:54.075 15:04:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:54.075 15:04:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:54.075 15:04:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:54.075 15:04:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:54.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:54.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:17:54.075 00:17:54.075 --- 10.0.0.2 ping statistics --- 00:17:54.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.075 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:17:54.075 15:04:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:54.075 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:54.075 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:17:54.075 00:17:54.075 --- 10.0.0.3 ping statistics --- 00:17:54.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.075 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:17:54.075 15:04:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:54.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:54.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:17:54.075 00:17:54.075 --- 10.0.0.1 ping statistics --- 00:17:54.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.075 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:17:54.075 15:04:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.075 15:04:17 -- nvmf/common.sh@421 -- # return 0 00:17:54.075 15:04:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:54.075 15:04:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.075 15:04:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:54.075 15:04:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:54.075 15:04:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.075 15:04:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:54.075 15:04:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:54.334 15:04:17 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=90547 00:17:54.334 15:04:17 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:54.334 15:04:17 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:54.334 15:04:17 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 90547 00:17:54.334 15:04:17 -- common/autotest_common.sh@829 -- # '[' -z 90547 ']' 00:17:54.334 15:04:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.334 15:04:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:54.334 15:04:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
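For reference, the nvmf_veth_init sequence traced above boils down to the following topology setup. This is a condensed, hand-written sketch (not part of the harness), using the namespace, interface and address names exactly as they appear in the trace; the cleanup errors earlier ("Cannot find device ...") are just the teardown of a topology that did not exist yet.

ip netns add nvmf_tgt_ns_spdk                                   # target runs inside this namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # first target veth pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target veth pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge; ip link set nvmf_br up         # bridge tying the host-side ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                              # initiator -> first target address
ping -c 1 10.0.0.3                                              # initiator -> second target address
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator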
00:17:54.334 15:04:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:54.334 15:04:17 -- common/autotest_common.sh@10 -- # set +x 00:17:55.280 15:04:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:55.280 15:04:18 -- common/autotest_common.sh@862 -- # return 0 00:17:55.280 15:04:18 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:55.280 15:04:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.280 15:04:18 -- common/autotest_common.sh@10 -- # set +x 00:17:55.280 15:04:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.280 15:04:18 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:17:55.280 15:04:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.280 15:04:18 -- common/autotest_common.sh@10 -- # set +x 00:17:55.280 Malloc0 00:17:55.280 15:04:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.280 15:04:18 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:55.280 15:04:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.280 15:04:18 -- common/autotest_common.sh@10 -- # set +x 00:17:55.280 15:04:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.280 15:04:18 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:55.280 15:04:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.280 15:04:18 -- common/autotest_common.sh@10 -- # set +x 00:17:55.280 15:04:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.280 15:04:18 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:55.280 15:04:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.280 15:04:18 -- common/autotest_common.sh@10 -- # set +x 00:17:55.280 15:04:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.280 15:04:18 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:17:55.280 15:04:18 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:17:55.577 Shutting down the fuzz application 00:17:55.577 15:04:19 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:17:55.851 Shutting down the fuzz application 00:17:55.851 15:04:19 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:55.851 15:04:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.851 15:04:19 -- common/autotest_common.sh@10 -- # set +x 00:17:56.109 15:04:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.109 15:04:19 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:17:56.109 15:04:19 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:17:56.109 15:04:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:56.109 15:04:19 -- nvmf/common.sh@116 -- # sync 00:17:56.109 15:04:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:56.109 15:04:19 -- nvmf/common.sh@119 -- # set +e 00:17:56.109 15:04:19 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:17:56.109 15:04:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:56.109 rmmod nvme_tcp 00:17:56.109 rmmod nvme_fabrics 00:17:56.109 rmmod nvme_keyring 00:17:56.109 15:04:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:56.109 15:04:19 -- nvmf/common.sh@123 -- # set -e 00:17:56.109 15:04:19 -- nvmf/common.sh@124 -- # return 0 00:17:56.109 15:04:19 -- nvmf/common.sh@477 -- # '[' -n 90547 ']' 00:17:56.109 15:04:19 -- nvmf/common.sh@478 -- # killprocess 90547 00:17:56.109 15:04:19 -- common/autotest_common.sh@936 -- # '[' -z 90547 ']' 00:17:56.109 15:04:19 -- common/autotest_common.sh@940 -- # kill -0 90547 00:17:56.109 15:04:19 -- common/autotest_common.sh@941 -- # uname 00:17:56.109 15:04:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:56.109 15:04:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90547 00:17:56.109 15:04:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:56.109 15:04:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:56.109 killing process with pid 90547 00:17:56.109 15:04:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90547' 00:17:56.109 15:04:19 -- common/autotest_common.sh@955 -- # kill 90547 00:17:56.109 15:04:19 -- common/autotest_common.sh@960 -- # wait 90547 00:17:56.367 15:04:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:56.367 15:04:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:56.367 15:04:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:56.367 15:04:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:56.367 15:04:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:56.367 15:04:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.367 15:04:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.367 15:04:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.367 15:04:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:56.367 15:04:19 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:17:56.367 00:17:56.367 real 0m2.743s 00:17:56.367 user 0m2.813s 00:17:56.367 sys 0m0.693s 00:17:56.367 15:04:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:56.367 15:04:19 -- common/autotest_common.sh@10 -- # set +x 00:17:56.367 ************************************ 00:17:56.367 END TEST nvmf_fuzz 00:17:56.367 ************************************ 00:17:56.367 15:04:19 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:56.367 15:04:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:56.367 15:04:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:56.367 15:04:19 -- common/autotest_common.sh@10 -- # set +x 00:17:56.367 ************************************ 00:17:56.367 START TEST nvmf_multiconnection 00:17:56.367 ************************************ 00:17:56.367 15:04:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:56.625 * Looking for test storage... 
00:17:56.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:56.625 15:04:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:56.625 15:04:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:56.625 15:04:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:56.625 15:04:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:56.626 15:04:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:56.626 15:04:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:56.626 15:04:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:56.626 15:04:20 -- scripts/common.sh@335 -- # IFS=.-: 00:17:56.626 15:04:20 -- scripts/common.sh@335 -- # read -ra ver1 00:17:56.626 15:04:20 -- scripts/common.sh@336 -- # IFS=.-: 00:17:56.626 15:04:20 -- scripts/common.sh@336 -- # read -ra ver2 00:17:56.626 15:04:20 -- scripts/common.sh@337 -- # local 'op=<' 00:17:56.626 15:04:20 -- scripts/common.sh@339 -- # ver1_l=2 00:17:56.626 15:04:20 -- scripts/common.sh@340 -- # ver2_l=1 00:17:56.626 15:04:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:56.626 15:04:20 -- scripts/common.sh@343 -- # case "$op" in 00:17:56.626 15:04:20 -- scripts/common.sh@344 -- # : 1 00:17:56.626 15:04:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:56.626 15:04:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:56.626 15:04:20 -- scripts/common.sh@364 -- # decimal 1 00:17:56.626 15:04:20 -- scripts/common.sh@352 -- # local d=1 00:17:56.626 15:04:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:56.626 15:04:20 -- scripts/common.sh@354 -- # echo 1 00:17:56.626 15:04:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:56.626 15:04:20 -- scripts/common.sh@365 -- # decimal 2 00:17:56.626 15:04:20 -- scripts/common.sh@352 -- # local d=2 00:17:56.626 15:04:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:56.626 15:04:20 -- scripts/common.sh@354 -- # echo 2 00:17:56.626 15:04:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:56.626 15:04:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:56.626 15:04:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:56.626 15:04:20 -- scripts/common.sh@367 -- # return 0 00:17:56.626 15:04:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:56.626 15:04:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:56.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.626 --rc genhtml_branch_coverage=1 00:17:56.626 --rc genhtml_function_coverage=1 00:17:56.626 --rc genhtml_legend=1 00:17:56.626 --rc geninfo_all_blocks=1 00:17:56.626 --rc geninfo_unexecuted_blocks=1 00:17:56.626 00:17:56.626 ' 00:17:56.626 15:04:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:56.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.626 --rc genhtml_branch_coverage=1 00:17:56.626 --rc genhtml_function_coverage=1 00:17:56.626 --rc genhtml_legend=1 00:17:56.626 --rc geninfo_all_blocks=1 00:17:56.626 --rc geninfo_unexecuted_blocks=1 00:17:56.626 00:17:56.626 ' 00:17:56.626 15:04:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:56.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.626 --rc genhtml_branch_coverage=1 00:17:56.626 --rc genhtml_function_coverage=1 00:17:56.626 --rc genhtml_legend=1 00:17:56.626 --rc geninfo_all_blocks=1 00:17:56.626 --rc geninfo_unexecuted_blocks=1 00:17:56.626 00:17:56.626 ' 00:17:56.626 
15:04:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:56.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.626 --rc genhtml_branch_coverage=1 00:17:56.626 --rc genhtml_function_coverage=1 00:17:56.626 --rc genhtml_legend=1 00:17:56.626 --rc geninfo_all_blocks=1 00:17:56.626 --rc geninfo_unexecuted_blocks=1 00:17:56.626 00:17:56.626 ' 00:17:56.626 15:04:20 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:56.626 15:04:20 -- nvmf/common.sh@7 -- # uname -s 00:17:56.626 15:04:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:56.626 15:04:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:56.626 15:04:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:56.626 15:04:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:56.626 15:04:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:56.626 15:04:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:56.626 15:04:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:56.626 15:04:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:56.626 15:04:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:56.626 15:04:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:56.626 15:04:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:17:56.626 15:04:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:17:56.626 15:04:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:56.626 15:04:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:56.626 15:04:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:56.626 15:04:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:56.626 15:04:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:56.626 15:04:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:56.626 15:04:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:56.626 15:04:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.626 15:04:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.626 15:04:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.626 15:04:20 -- paths/export.sh@5 -- # export PATH 00:17:56.626 15:04:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.626 15:04:20 -- nvmf/common.sh@46 -- # : 0 00:17:56.626 15:04:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:56.626 15:04:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:56.626 15:04:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:56.626 15:04:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:56.626 15:04:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:56.626 15:04:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:56.626 15:04:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:56.626 15:04:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:56.626 15:04:20 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:56.626 15:04:20 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:56.626 15:04:20 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:17:56.626 15:04:20 -- target/multiconnection.sh@16 -- # nvmftestinit 00:17:56.626 15:04:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:56.626 15:04:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:56.626 15:04:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:56.626 15:04:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:56.626 15:04:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:56.626 15:04:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.626 15:04:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.626 15:04:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.626 15:04:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:56.626 15:04:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:56.626 15:04:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:56.626 15:04:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:56.626 15:04:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:56.626 15:04:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:56.626 15:04:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:56.626 15:04:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:56.626 15:04:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:56.626 15:04:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:56.626 15:04:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:56.626 15:04:20 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:56.626 15:04:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:56.626 15:04:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:56.626 15:04:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:56.626 15:04:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:56.626 15:04:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:56.626 15:04:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:56.626 15:04:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:56.626 15:04:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:56.626 Cannot find device "nvmf_tgt_br" 00:17:56.626 15:04:20 -- nvmf/common.sh@154 -- # true 00:17:56.626 15:04:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:56.626 Cannot find device "nvmf_tgt_br2" 00:17:56.626 15:04:20 -- nvmf/common.sh@155 -- # true 00:17:56.626 15:04:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:56.626 15:04:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:56.626 Cannot find device "nvmf_tgt_br" 00:17:56.626 15:04:20 -- nvmf/common.sh@157 -- # true 00:17:56.626 15:04:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:56.626 Cannot find device "nvmf_tgt_br2" 00:17:56.626 15:04:20 -- nvmf/common.sh@158 -- # true 00:17:56.626 15:04:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:56.884 15:04:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:56.884 15:04:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:56.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:56.884 15:04:20 -- nvmf/common.sh@161 -- # true 00:17:56.884 15:04:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:56.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:56.884 15:04:20 -- nvmf/common.sh@162 -- # true 00:17:56.884 15:04:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:56.884 15:04:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:56.884 15:04:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:56.884 15:04:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:56.884 15:04:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:56.884 15:04:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:56.884 15:04:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:56.884 15:04:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:56.884 15:04:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:56.884 15:04:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:56.884 15:04:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:56.884 15:04:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:56.884 15:04:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:56.884 15:04:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:56.884 15:04:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:17:56.884 15:04:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:56.884 15:04:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:56.884 15:04:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:56.884 15:04:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:56.884 15:04:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:56.884 15:04:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:56.884 15:04:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:56.884 15:04:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:57.143 15:04:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:57.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:17:57.143 00:17:57.143 --- 10.0.0.2 ping statistics --- 00:17:57.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.143 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:57.143 15:04:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:57.143 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:57.143 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:17:57.143 00:17:57.143 --- 10.0.0.3 ping statistics --- 00:17:57.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.143 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:17:57.143 15:04:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:57.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:57.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:57.143 00:17:57.143 --- 10.0.0.1 ping statistics --- 00:17:57.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.143 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:57.143 15:04:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.143 15:04:20 -- nvmf/common.sh@421 -- # return 0 00:17:57.143 15:04:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:57.143 15:04:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.143 15:04:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:57.143 15:04:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:57.143 15:04:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.143 15:04:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:57.143 15:04:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:57.143 15:04:20 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:17:57.143 15:04:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:57.143 15:04:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:57.143 15:04:20 -- common/autotest_common.sh@10 -- # set +x 00:17:57.143 15:04:20 -- nvmf/common.sh@469 -- # nvmfpid=90766 00:17:57.143 15:04:20 -- nvmf/common.sh@470 -- # waitforlisten 90766 00:17:57.143 15:04:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:57.143 15:04:20 -- common/autotest_common.sh@829 -- # '[' -z 90766 ']' 00:17:57.143 15:04:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.143 15:04:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:57.143 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:17:57.143 15:04:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.143 15:04:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:57.143 15:04:20 -- common/autotest_common.sh@10 -- # set +x 00:17:57.143 [2024-11-18 15:04:20.563075] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:57.143 [2024-11-18 15:04:20.563166] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.143 [2024-11-18 15:04:20.702965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:57.402 [2024-11-18 15:04:20.762187] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:57.402 [2024-11-18 15:04:20.762360] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.402 [2024-11-18 15:04:20.762373] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.402 [2024-11-18 15:04:20.762383] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:57.402 [2024-11-18 15:04:20.762457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.402 [2024-11-18 15:04:20.762538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.402 [2024-11-18 15:04:20.762681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:57.402 [2024-11-18 15:04:20.762698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.338 15:04:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.338 15:04:21 -- common/autotest_common.sh@862 -- # return 0 00:17:58.338 15:04:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:58.338 15:04:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:58.338 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.338 15:04:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.338 15:04:21 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:58.338 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.338 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.338 [2024-11-18 15:04:21.636969] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.338 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.338 15:04:21 -- target/multiconnection.sh@21 -- # seq 1 11 00:17:58.338 15:04:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.339 15:04:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:58.339 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.339 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.339 Malloc1 00:17:58.339 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.339 15:04:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:17:58.339 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.339 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.339 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.339 
15:04:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:58.339 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.339 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.339 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.339 15:04:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.339 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.339 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.339 [2024-11-18 15:04:21.711419] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.339 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.339 15:04:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.339 15:04:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:17:58.339 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.339 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.339 Malloc2 00:17:58.339 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.339 15:04:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:58.339 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.339 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.339 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.339 15:04:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:17:58.339 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.339 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.339 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.339 15:04:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:58.339 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.339 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.339 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.339 15:04:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.339 15:04:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:17:58.339 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.339 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.339 Malloc3 00:17:58.339 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.339 15:04:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:17:58.339 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.339 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.339 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.339 15:04:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:17:58.339 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.339 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.339 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.339 15:04:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 
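The multiconnection target setup traced here repeats the same RPC sequence for each of the 11 subsystems (cnode1 is shown above; cnode2 through cnode11 follow below). Written out as plain scripts/rpc.py calls against the default /var/tmp/spdk.sock socket, it is roughly the loop below; this is a sketch under the assumption that the trace's rpc_cmd helper issues the same RPC methods.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                               # one-time transport setup, options as in the trace
for i in $(seq 1 11); do
  $rpc bdev_malloc_create 64 512 -b "Malloc$i"                             # 64 MiB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"  # allow any host, serial number SPDK$i
  $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
  $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done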
00:17:58.339 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.339 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.339 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.339 15:04:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.339 15:04:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:17:58.339 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.339 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.339 Malloc4 00:17:58.339 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.339 15:04:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:17:58.339 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.339 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.339 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.339 15:04:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:17:58.339 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.339 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.339 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.339 15:04:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:17:58.339 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.339 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.339 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.339 15:04:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.339 15:04:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:17:58.339 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.339 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.339 Malloc5 00:17:58.339 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.339 15:04:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:17:58.339 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.339 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.339 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.339 15:04:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:17:58.339 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.339 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.339 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.339 15:04:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:17:58.339 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.339 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.339 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.339 15:04:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.339 15:04:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:17:58.339 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.339 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.598 Malloc6 00:17:58.598 15:04:21 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.598 15:04:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:17:58.598 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.598 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.598 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.598 15:04:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:17:58.598 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.598 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.598 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.598 15:04:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:17:58.598 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.598 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.598 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.598 15:04:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.598 15:04:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:17:58.598 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.598 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.598 Malloc7 00:17:58.598 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.598 15:04:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:17:58.598 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.598 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.599 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.599 15:04:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:17:58.599 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.599 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.599 15:04:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.599 15:04:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:17:58.599 15:04:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.599 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.599 15:04:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.599 15:04:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.599 15:04:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:17:58.599 15:04:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.599 15:04:22 -- common/autotest_common.sh@10 -- # set +x 00:17:58.599 Malloc8 00:17:58.599 15:04:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.599 15:04:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:17:58.599 15:04:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.599 15:04:22 -- common/autotest_common.sh@10 -- # set +x 00:17:58.599 15:04:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.599 15:04:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:17:58.599 15:04:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.599 15:04:22 
-- common/autotest_common.sh@10 -- # set +x 00:17:58.599 15:04:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.599 15:04:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:17:58.599 15:04:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.599 15:04:22 -- common/autotest_common.sh@10 -- # set +x 00:17:58.599 15:04:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.599 15:04:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.599 15:04:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:17:58.599 15:04:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.599 15:04:22 -- common/autotest_common.sh@10 -- # set +x 00:17:58.599 Malloc9 00:17:58.599 15:04:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.599 15:04:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:17:58.599 15:04:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.599 15:04:22 -- common/autotest_common.sh@10 -- # set +x 00:17:58.599 15:04:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.599 15:04:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:17:58.599 15:04:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.599 15:04:22 -- common/autotest_common.sh@10 -- # set +x 00:17:58.599 15:04:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.599 15:04:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:17:58.599 15:04:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.599 15:04:22 -- common/autotest_common.sh@10 -- # set +x 00:17:58.599 15:04:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.599 15:04:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.599 15:04:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:17:58.599 15:04:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.599 15:04:22 -- common/autotest_common.sh@10 -- # set +x 00:17:58.599 Malloc10 00:17:58.599 15:04:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.599 15:04:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:17:58.599 15:04:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.599 15:04:22 -- common/autotest_common.sh@10 -- # set +x 00:17:58.599 15:04:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.599 15:04:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:17:58.599 15:04:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.599 15:04:22 -- common/autotest_common.sh@10 -- # set +x 00:17:58.599 15:04:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.599 15:04:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:17:58.599 15:04:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.599 15:04:22 -- common/autotest_common.sh@10 -- # set +x 00:17:58.599 15:04:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.599 15:04:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.599 15:04:22 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:17:58.599 15:04:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.599 15:04:22 -- common/autotest_common.sh@10 -- # set +x 00:17:58.858 Malloc11 00:17:58.858 15:04:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.858 15:04:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:17:58.858 15:04:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.858 15:04:22 -- common/autotest_common.sh@10 -- # set +x 00:17:58.858 15:04:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.858 15:04:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:17:58.858 15:04:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.858 15:04:22 -- common/autotest_common.sh@10 -- # set +x 00:17:58.858 15:04:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.858 15:04:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:17:58.858 15:04:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.858 15:04:22 -- common/autotest_common.sh@10 -- # set +x 00:17:58.858 15:04:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.858 15:04:22 -- target/multiconnection.sh@28 -- # seq 1 11 00:17:58.858 15:04:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.858 15:04:22 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:58.858 15:04:22 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:17:58.858 15:04:22 -- common/autotest_common.sh@1187 -- # local i=0 00:17:58.858 15:04:22 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:58.858 15:04:22 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:58.858 15:04:22 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:01.391 15:04:24 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:01.391 15:04:24 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:01.391 15:04:24 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:18:01.391 15:04:24 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:01.391 15:04:24 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:01.391 15:04:24 -- common/autotest_common.sh@1197 -- # return 0 00:18:01.391 15:04:24 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:01.391 15:04:24 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:01.391 15:04:24 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:01.391 15:04:24 -- common/autotest_common.sh@1187 -- # local i=0 00:18:01.391 15:04:24 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:01.391 15:04:24 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:01.391 15:04:24 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:03.296 15:04:26 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:03.296 15:04:26 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:18:03.296 15:04:26 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:18:03.296 15:04:26 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:03.296 15:04:26 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:03.296 15:04:26 -- common/autotest_common.sh@1197 -- # return 0 00:18:03.296 15:04:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:03.296 15:04:26 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:03.296 15:04:26 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:03.296 15:04:26 -- common/autotest_common.sh@1187 -- # local i=0 00:18:03.296 15:04:26 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:03.296 15:04:26 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:03.296 15:04:26 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:05.828 15:04:28 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:05.828 15:04:28 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:05.828 15:04:28 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:18:05.828 15:04:28 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:05.828 15:04:28 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:05.828 15:04:28 -- common/autotest_common.sh@1197 -- # return 0 00:18:05.828 15:04:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:05.828 15:04:28 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:05.828 15:04:28 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:05.828 15:04:28 -- common/autotest_common.sh@1187 -- # local i=0 00:18:05.828 15:04:28 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:05.828 15:04:28 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:05.828 15:04:28 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:07.732 15:04:30 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:07.732 15:04:31 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:07.732 15:04:31 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:18:07.732 15:04:31 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:07.732 15:04:31 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:07.732 15:04:31 -- common/autotest_common.sh@1197 -- # return 0 00:18:07.732 15:04:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:07.732 15:04:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:07.732 15:04:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:07.732 15:04:31 -- common/autotest_common.sh@1187 -- # local i=0 00:18:07.732 15:04:31 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:07.732 15:04:31 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:07.732 15:04:31 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:09.636 15:04:33 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:09.637 15:04:33 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:18:09.637 15:04:33 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:09.637 15:04:33 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:09.637 15:04:33 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:09.637 15:04:33 -- common/autotest_common.sh@1197 -- # return 0 00:18:09.637 15:04:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:09.637 15:04:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:09.896 15:04:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:09.896 15:04:33 -- common/autotest_common.sh@1187 -- # local i=0 00:18:09.896 15:04:33 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:09.896 15:04:33 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:09.896 15:04:33 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:12.430 15:04:35 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:12.430 15:04:35 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:18:12.430 15:04:35 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:12.430 15:04:35 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:12.430 15:04:35 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:12.430 15:04:35 -- common/autotest_common.sh@1197 -- # return 0 00:18:12.430 15:04:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.430 15:04:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:12.430 15:04:35 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:12.430 15:04:35 -- common/autotest_common.sh@1187 -- # local i=0 00:18:12.430 15:04:35 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:12.430 15:04:35 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:12.430 15:04:35 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:14.334 15:04:37 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:14.334 15:04:37 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:14.334 15:04:37 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:18:14.334 15:04:37 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:14.334 15:04:37 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:14.334 15:04:37 -- common/autotest_common.sh@1197 -- # return 0 00:18:14.334 15:04:37 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:14.334 15:04:37 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:14.334 15:04:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:14.334 15:04:37 -- common/autotest_common.sh@1187 -- # local i=0 00:18:14.334 15:04:37 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:14.334 15:04:37 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:14.334 15:04:37 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:16.237 15:04:39 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:16.237 15:04:39 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:16.237 15:04:39 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:18:16.237 15:04:39 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:16.237 15:04:39 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:16.237 15:04:39 -- common/autotest_common.sh@1197 -- # return 0 00:18:16.237 15:04:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.237 15:04:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:16.496 15:04:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:16.496 15:04:39 -- common/autotest_common.sh@1187 -- # local i=0 00:18:16.496 15:04:39 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:16.496 15:04:39 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:16.496 15:04:39 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:19.035 15:04:42 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:19.036 15:04:42 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:19.036 15:04:42 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:18:19.036 15:04:42 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:19.036 15:04:42 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:19.036 15:04:42 -- common/autotest_common.sh@1197 -- # return 0 00:18:19.036 15:04:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:19.036 15:04:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:19.036 15:04:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:19.036 15:04:42 -- common/autotest_common.sh@1187 -- # local i=0 00:18:19.036 15:04:42 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:19.036 15:04:42 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:19.036 15:04:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:20.943 15:04:44 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:20.943 15:04:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:20.943 15:04:44 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:18:20.943 15:04:44 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:20.943 15:04:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:20.944 15:04:44 -- common/autotest_common.sh@1197 -- # return 0 00:18:20.944 15:04:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.944 15:04:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:20.944 15:04:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:20.944 15:04:44 -- common/autotest_common.sh@1187 -- # local i=0 
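On the initiator side, the trace connects to each of the 11 subsystems and then polls until the corresponding block device shows up. A stand-alone sketch of that loop, using the hostnqn/hostid generated earlier in the trace by nvme gen-hostnqn, looks like the following; the real waitforserial helper counts lsblk matches and gives up after roughly 15 tries, which this simplified version does not.

hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd
hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd
for i in $(seq 1 11); do
  nvme connect --hostnqn="$hostnqn" --hostid="$hostid" \
       -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
  until lsblk -l -o NAME,SERIAL | grep -q "SPDK$i"; do   # wait for the namespace with serial SPDK$i to appear
    sleep 2
  done
done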
00:18:20.944 15:04:44 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:20.944 15:04:44 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:20.944 15:04:44 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:22.847 15:04:46 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:22.847 15:04:46 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:22.847 15:04:46 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:18:23.107 15:04:46 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:23.107 15:04:46 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.107 15:04:46 -- common/autotest_common.sh@1197 -- # return 0 00:18:23.107 15:04:46 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:23.107 [global] 00:18:23.107 thread=1 00:18:23.107 invalidate=1 00:18:23.107 rw=read 00:18:23.107 time_based=1 00:18:23.107 runtime=10 00:18:23.107 ioengine=libaio 00:18:23.107 direct=1 00:18:23.107 bs=262144 00:18:23.107 iodepth=64 00:18:23.107 norandommap=1 00:18:23.107 numjobs=1 00:18:23.107 00:18:23.107 [job0] 00:18:23.107 filename=/dev/nvme0n1 00:18:23.107 [job1] 00:18:23.107 filename=/dev/nvme10n1 00:18:23.107 [job2] 00:18:23.107 filename=/dev/nvme1n1 00:18:23.107 [job3] 00:18:23.107 filename=/dev/nvme2n1 00:18:23.107 [job4] 00:18:23.107 filename=/dev/nvme3n1 00:18:23.107 [job5] 00:18:23.107 filename=/dev/nvme4n1 00:18:23.107 [job6] 00:18:23.107 filename=/dev/nvme5n1 00:18:23.107 [job7] 00:18:23.107 filename=/dev/nvme6n1 00:18:23.107 [job8] 00:18:23.107 filename=/dev/nvme7n1 00:18:23.107 [job9] 00:18:23.107 filename=/dev/nvme8n1 00:18:23.107 [job10] 00:18:23.107 filename=/dev/nvme9n1 00:18:23.107 Could not set queue depth (nvme0n1) 00:18:23.107 Could not set queue depth (nvme10n1) 00:18:23.107 Could not set queue depth (nvme1n1) 00:18:23.107 Could not set queue depth (nvme2n1) 00:18:23.107 Could not set queue depth (nvme3n1) 00:18:23.107 Could not set queue depth (nvme4n1) 00:18:23.107 Could not set queue depth (nvme5n1) 00:18:23.107 Could not set queue depth (nvme6n1) 00:18:23.107 Could not set queue depth (nvme7n1) 00:18:23.107 Could not set queue depth (nvme8n1) 00:18:23.107 Could not set queue depth (nvme9n1) 00:18:23.366 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:23.366 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:23.366 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:23.366 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:23.366 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:23.366 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:23.366 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:23.366 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:23.366 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:23.366 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:18:23.366 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:23.366 fio-3.35 00:18:23.366 Starting 11 threads 00:18:35.578 00:18:35.578 job0: (groupid=0, jobs=1): err= 0: pid=91244: Mon Nov 18 15:04:57 2024 00:18:35.578 read: IOPS=484, BW=121MiB/s (127MB/s)(1226MiB/10112msec) 00:18:35.578 slat (usec): min=20, max=90044, avg=2036.23, stdev=7664.54 00:18:35.578 clat (msec): min=41, max=268, avg=129.65, stdev=20.70 00:18:35.578 lat (msec): min=41, max=268, avg=131.68, stdev=22.01 00:18:35.578 clat percentiles (msec): 00:18:35.578 | 1.00th=[ 89], 5.00th=[ 103], 10.00th=[ 110], 20.00th=[ 117], 00:18:35.578 | 30.00th=[ 121], 40.00th=[ 125], 50.00th=[ 129], 60.00th=[ 132], 00:18:35.578 | 70.00th=[ 136], 80.00th=[ 140], 90.00th=[ 153], 95.00th=[ 167], 00:18:35.578 | 99.00th=[ 197], 99.50th=[ 220], 99.90th=[ 262], 99.95th=[ 268], 00:18:35.578 | 99.99th=[ 268] 00:18:35.578 bw ( KiB/s): min=92160, max=147968, per=7.91%, avg=123890.65, stdev=12122.11, samples=20 00:18:35.578 iops : min= 360, max= 578, avg=483.80, stdev=47.26, samples=20 00:18:35.578 lat (msec) : 50=0.10%, 100=3.89%, 250=95.82%, 500=0.18% 00:18:35.578 cpu : usr=0.25%, sys=1.75%, ctx=939, majf=0, minf=4097 00:18:35.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:35.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:35.578 issued rwts: total=4904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.578 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:35.578 job1: (groupid=0, jobs=1): err= 0: pid=91245: Mon Nov 18 15:04:57 2024 00:18:35.578 read: IOPS=713, BW=178MiB/s (187MB/s)(1804MiB/10114msec) 00:18:35.578 slat (usec): min=13, max=103869, avg=1369.04, stdev=5277.53 00:18:35.578 clat (msec): min=15, max=304, avg=88.22, stdev=29.46 00:18:35.578 lat (msec): min=15, max=304, avg=89.59, stdev=30.14 00:18:35.578 clat percentiles (msec): 00:18:35.578 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 40], 20.00th=[ 71], 00:18:35.578 | 30.00th=[ 83], 40.00th=[ 87], 50.00th=[ 92], 60.00th=[ 96], 00:18:35.578 | 70.00th=[ 101], 80.00th=[ 106], 90.00th=[ 113], 95.00th=[ 138], 00:18:35.578 | 99.00th=[ 171], 99.50th=[ 184], 99.90th=[ 264], 99.95th=[ 264], 00:18:35.578 | 99.99th=[ 305] 00:18:35.578 bw ( KiB/s): min=94208, max=399808, per=11.68%, avg=182904.80, stdev=62124.56, samples=20 00:18:35.578 iops : min= 368, max= 1561, avg=714.30, stdev=242.57, samples=20 00:18:35.578 lat (msec) : 20=0.21%, 50=12.91%, 100=56.13%, 250=30.65%, 500=0.11% 00:18:35.578 cpu : usr=0.25%, sys=2.08%, ctx=1329, majf=0, minf=4097 00:18:35.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:35.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:35.578 issued rwts: total=7214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.578 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:35.578 job2: (groupid=0, jobs=1): err= 0: pid=91246: Mon Nov 18 15:04:57 2024 00:18:35.578 read: IOPS=477, BW=119MiB/s (125MB/s)(1205MiB/10091msec) 00:18:35.578 slat (usec): min=12, max=80598, avg=2021.08, stdev=7210.33 00:18:35.578 clat (msec): min=23, max=210, avg=131.79, stdev=18.86 00:18:35.578 lat (msec): min=24, max=223, avg=133.81, stdev=20.04 00:18:35.578 clat percentiles (msec): 00:18:35.578 | 1.00th=[ 67], 
5.00th=[ 105], 10.00th=[ 113], 20.00th=[ 121], 00:18:35.578 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 132], 60.00th=[ 136], 00:18:35.578 | 70.00th=[ 138], 80.00th=[ 146], 90.00th=[ 153], 95.00th=[ 161], 00:18:35.578 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 201], 99.95th=[ 209], 00:18:35.579 | 99.99th=[ 211] 00:18:35.579 bw ( KiB/s): min=108838, max=135168, per=7.77%, avg=121692.10, stdev=7603.17, samples=20 00:18:35.579 iops : min= 425, max= 528, avg=475.30, stdev=29.77, samples=20 00:18:35.579 lat (msec) : 50=0.31%, 100=2.89%, 250=96.80% 00:18:35.579 cpu : usr=0.18%, sys=1.75%, ctx=792, majf=0, minf=4097 00:18:35.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:35.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:35.579 issued rwts: total=4818,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.579 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:35.579 job3: (groupid=0, jobs=1): err= 0: pid=91247: Mon Nov 18 15:04:57 2024 00:18:35.579 read: IOPS=696, BW=174MiB/s (183MB/s)(1750MiB/10052msec) 00:18:35.579 slat (usec): min=14, max=86562, avg=1403.11, stdev=5143.53 00:18:35.579 clat (msec): min=34, max=196, avg=90.33, stdev=21.84 00:18:35.579 lat (msec): min=35, max=219, avg=91.74, stdev=22.53 00:18:35.579 clat percentiles (msec): 00:18:35.579 | 1.00th=[ 46], 5.00th=[ 59], 10.00th=[ 67], 20.00th=[ 74], 00:18:35.579 | 30.00th=[ 82], 40.00th=[ 86], 50.00th=[ 90], 60.00th=[ 94], 00:18:35.579 | 70.00th=[ 97], 80.00th=[ 102], 90.00th=[ 111], 95.00th=[ 130], 00:18:35.579 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 188], 99.95th=[ 197], 00:18:35.579 | 99.99th=[ 197] 00:18:35.579 bw ( KiB/s): min=115200, max=260608, per=11.34%, avg=177448.25, stdev=31814.28, samples=20 00:18:35.579 iops : min= 450, max= 1018, avg=693.00, stdev=124.24, samples=20 00:18:35.579 lat (msec) : 50=1.84%, 100=75.38%, 250=22.77% 00:18:35.579 cpu : usr=0.17%, sys=2.10%, ctx=1417, majf=0, minf=4097 00:18:35.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:35.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:35.579 issued rwts: total=6999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.579 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:35.579 job4: (groupid=0, jobs=1): err= 0: pid=91248: Mon Nov 18 15:04:57 2024 00:18:35.579 read: IOPS=719, BW=180MiB/s (189MB/s)(1806MiB/10040msec) 00:18:35.579 slat (usec): min=20, max=62082, avg=1324.99, stdev=5095.84 00:18:35.579 clat (msec): min=19, max=204, avg=87.49, stdev=23.61 00:18:35.579 lat (msec): min=19, max=204, avg=88.81, stdev=24.19 00:18:35.579 clat percentiles (msec): 00:18:35.579 | 1.00th=[ 29], 5.00th=[ 37], 10.00th=[ 56], 20.00th=[ 70], 00:18:35.579 | 30.00th=[ 80], 40.00th=[ 87], 50.00th=[ 92], 60.00th=[ 96], 00:18:35.579 | 70.00th=[ 101], 80.00th=[ 106], 90.00th=[ 112], 95.00th=[ 117], 00:18:35.579 | 99.00th=[ 161], 99.50th=[ 171], 99.90th=[ 184], 99.95th=[ 184], 00:18:35.579 | 99.99th=[ 205] 00:18:35.579 bw ( KiB/s): min=141312, max=336896, per=11.71%, avg=183281.00, stdev=44725.87, samples=20 00:18:35.579 iops : min= 552, max= 1316, avg=715.80, stdev=174.77, samples=20 00:18:35.579 lat (msec) : 20=0.03%, 50=7.83%, 100=61.92%, 250=30.21% 00:18:35.579 cpu : usr=0.24%, sys=2.31%, ctx=1249, majf=0, minf=4097 00:18:35.579 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:35.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:35.579 issued rwts: total=7225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.579 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:35.579 job5: (groupid=0, jobs=1): err= 0: pid=91249: Mon Nov 18 15:04:57 2024 00:18:35.579 read: IOPS=501, BW=125MiB/s (132MB/s)(1266MiB/10093msec) 00:18:35.579 slat (usec): min=15, max=68747, avg=1903.00, stdev=6529.31 00:18:35.579 clat (msec): min=9, max=214, avg=125.50, stdev=28.23 00:18:35.579 lat (msec): min=9, max=234, avg=127.40, stdev=29.20 00:18:35.579 clat percentiles (msec): 00:18:35.579 | 1.00th=[ 26], 5.00th=[ 50], 10.00th=[ 99], 20.00th=[ 114], 00:18:35.579 | 30.00th=[ 122], 40.00th=[ 126], 50.00th=[ 131], 60.00th=[ 136], 00:18:35.579 | 70.00th=[ 140], 80.00th=[ 144], 90.00th=[ 150], 95.00th=[ 157], 00:18:35.579 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 199], 99.95th=[ 199], 00:18:35.579 | 99.99th=[ 215] 00:18:35.579 bw ( KiB/s): min=108032, max=239648, per=8.17%, avg=127926.85, stdev=27479.13, samples=20 00:18:35.579 iops : min= 422, max= 936, avg=499.65, stdev=107.30, samples=20 00:18:35.579 lat (msec) : 10=0.12%, 20=0.34%, 50=4.80%, 100=5.27%, 250=89.47% 00:18:35.579 cpu : usr=0.14%, sys=1.74%, ctx=1027, majf=0, minf=4097 00:18:35.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:35.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:35.579 issued rwts: total=5063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.579 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:35.579 job6: (groupid=0, jobs=1): err= 0: pid=91250: Mon Nov 18 15:04:57 2024 00:18:35.579 read: IOPS=518, BW=130MiB/s (136MB/s)(1311MiB/10106msec) 00:18:35.579 slat (usec): min=13, max=105770, avg=1856.05, stdev=7482.71 00:18:35.579 clat (msec): min=39, max=252, avg=121.32, stdev=28.54 00:18:35.579 lat (msec): min=39, max=277, avg=123.17, stdev=29.68 00:18:35.579 clat percentiles (msec): 00:18:35.579 | 1.00th=[ 56], 5.00th=[ 65], 10.00th=[ 71], 20.00th=[ 109], 00:18:35.579 | 30.00th=[ 118], 40.00th=[ 124], 50.00th=[ 128], 60.00th=[ 131], 00:18:35.579 | 70.00th=[ 136], 80.00th=[ 140], 90.00th=[ 150], 95.00th=[ 157], 00:18:35.579 | 99.00th=[ 180], 99.50th=[ 213], 99.90th=[ 253], 99.95th=[ 253], 00:18:35.579 | 99.99th=[ 253] 00:18:35.579 bw ( KiB/s): min=97280, max=233005, per=8.47%, avg=132630.95, stdev=29907.01, samples=20 00:18:35.579 iops : min= 380, max= 910, avg=517.95, stdev=116.78, samples=20 00:18:35.579 lat (msec) : 50=0.48%, 100=17.36%, 250=81.84%, 500=0.32% 00:18:35.579 cpu : usr=0.27%, sys=1.47%, ctx=1046, majf=0, minf=4097 00:18:35.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:35.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:35.579 issued rwts: total=5243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.579 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:35.579 job7: (groupid=0, jobs=1): err= 0: pid=91251: Mon Nov 18 15:04:57 2024 00:18:35.579 read: IOPS=462, BW=116MiB/s (121MB/s)(1167MiB/10094msec) 00:18:35.579 slat (usec): min=15, max=71229, avg=2104.11, stdev=6914.83 00:18:35.579 clat (msec): 
min=27, max=229, avg=136.05, stdev=19.66 00:18:35.579 lat (msec): min=28, max=229, avg=138.15, stdev=20.79 00:18:35.579 clat percentiles (msec): 00:18:35.579 | 1.00th=[ 70], 5.00th=[ 106], 10.00th=[ 116], 20.00th=[ 126], 00:18:35.579 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 136], 60.00th=[ 140], 00:18:35.579 | 70.00th=[ 144], 80.00th=[ 148], 90.00th=[ 157], 95.00th=[ 165], 00:18:35.579 | 99.00th=[ 186], 99.50th=[ 203], 99.90th=[ 230], 99.95th=[ 230], 00:18:35.579 | 99.99th=[ 230] 00:18:35.579 bw ( KiB/s): min=104448, max=140288, per=7.53%, avg=117818.15, stdev=9085.83, samples=20 00:18:35.579 iops : min= 408, max= 548, avg=460.20, stdev=35.50, samples=20 00:18:35.579 lat (msec) : 50=0.13%, 100=3.64%, 250=96.23% 00:18:35.579 cpu : usr=0.18%, sys=1.84%, ctx=894, majf=0, minf=4097 00:18:35.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:35.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:35.579 issued rwts: total=4668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.579 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:35.579 job8: (groupid=0, jobs=1): err= 0: pid=91252: Mon Nov 18 15:04:57 2024 00:18:35.579 read: IOPS=501, BW=125MiB/s (132MB/s)(1269MiB/10115msec) 00:18:35.579 slat (usec): min=15, max=101150, avg=1950.96, stdev=7554.92 00:18:35.579 clat (msec): min=29, max=263, avg=125.33, stdev=25.83 00:18:35.579 lat (msec): min=30, max=263, avg=127.28, stdev=27.07 00:18:35.579 clat percentiles (msec): 00:18:35.579 | 1.00th=[ 62], 5.00th=[ 72], 10.00th=[ 86], 20.00th=[ 111], 00:18:35.579 | 30.00th=[ 121], 40.00th=[ 125], 50.00th=[ 128], 60.00th=[ 132], 00:18:35.579 | 70.00th=[ 136], 80.00th=[ 142], 90.00th=[ 150], 95.00th=[ 161], 00:18:35.579 | 99.00th=[ 199], 99.50th=[ 218], 99.90th=[ 264], 99.95th=[ 264], 00:18:35.579 | 99.99th=[ 264] 00:18:35.579 bw ( KiB/s): min=96256, max=217088, per=8.19%, avg=128204.85, stdev=23742.35, samples=20 00:18:35.579 iops : min= 376, max= 848, avg=500.75, stdev=92.72, samples=20 00:18:35.579 lat (msec) : 50=0.59%, 100=12.63%, 250=86.66%, 500=0.12% 00:18:35.579 cpu : usr=0.19%, sys=1.70%, ctx=874, majf=0, minf=4097 00:18:35.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:35.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:35.579 issued rwts: total=5075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.579 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:35.579 job9: (groupid=0, jobs=1): err= 0: pid=91253: Mon Nov 18 15:04:57 2024 00:18:35.579 read: IOPS=512, BW=128MiB/s (134MB/s)(1293MiB/10088msec) 00:18:35.579 slat (usec): min=11, max=71124, avg=1825.61, stdev=6326.55 00:18:35.579 clat (msec): min=12, max=219, avg=122.71, stdev=30.27 00:18:35.579 lat (msec): min=12, max=219, avg=124.54, stdev=31.25 00:18:35.579 clat percentiles (msec): 00:18:35.579 | 1.00th=[ 40], 5.00th=[ 58], 10.00th=[ 73], 20.00th=[ 103], 00:18:35.579 | 30.00th=[ 120], 40.00th=[ 127], 50.00th=[ 132], 60.00th=[ 136], 00:18:35.579 | 70.00th=[ 140], 80.00th=[ 144], 90.00th=[ 150], 95.00th=[ 157], 00:18:35.579 | 99.00th=[ 176], 99.50th=[ 192], 99.90th=[ 220], 99.95th=[ 220], 00:18:35.579 | 99.99th=[ 220] 00:18:35.579 bw ( KiB/s): min=106496, max=199168, per=8.35%, avg=130757.80, stdev=28778.29, samples=20 00:18:35.579 iops : min= 416, max= 778, avg=510.60, 
stdev=112.43, samples=20 00:18:35.579 lat (msec) : 20=0.31%, 50=2.42%, 100=15.53%, 250=81.75% 00:18:35.579 cpu : usr=0.14%, sys=1.73%, ctx=1007, majf=0, minf=4097 00:18:35.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:35.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:35.579 issued rwts: total=5172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.579 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:35.580 job10: (groupid=0, jobs=1): err= 0: pid=91254: Mon Nov 18 15:04:57 2024 00:18:35.580 read: IOPS=540, BW=135MiB/s (142MB/s)(1367MiB/10108msec) 00:18:35.580 slat (usec): min=13, max=73756, avg=1788.44, stdev=6208.69 00:18:35.580 clat (msec): min=8, max=254, avg=116.27, stdev=35.32 00:18:35.580 lat (msec): min=9, max=254, avg=118.06, stdev=36.18 00:18:35.580 clat percentiles (msec): 00:18:35.580 | 1.00th=[ 34], 5.00th=[ 53], 10.00th=[ 64], 20.00th=[ 74], 00:18:35.580 | 30.00th=[ 111], 40.00th=[ 123], 50.00th=[ 128], 60.00th=[ 133], 00:18:35.580 | 70.00th=[ 138], 80.00th=[ 142], 90.00th=[ 150], 95.00th=[ 157], 00:18:35.580 | 99.00th=[ 184], 99.50th=[ 213], 99.90th=[ 247], 99.95th=[ 255], 00:18:35.580 | 99.99th=[ 255] 00:18:35.580 bw ( KiB/s): min=101376, max=255489, per=8.84%, avg=138353.45, stdev=44732.34, samples=20 00:18:35.580 iops : min= 396, max= 998, avg=540.15, stdev=174.70, samples=20 00:18:35.580 lat (msec) : 10=0.07%, 50=4.63%, 100=21.25%, 250=73.96%, 500=0.09% 00:18:35.580 cpu : usr=0.15%, sys=1.59%, ctx=1021, majf=0, minf=4097 00:18:35.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:35.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:35.580 issued rwts: total=5468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.580 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:35.580 00:18:35.580 Run status group 0 (all jobs): 00:18:35.580 READ: bw=1529MiB/s (1603MB/s), 116MiB/s-180MiB/s (121MB/s-189MB/s), io=15.1GiB (16.2GB), run=10040-10115msec 00:18:35.580 00:18:35.580 Disk stats (read/write): 00:18:35.580 nvme0n1: ios=9686/0, merge=0/0, ticks=1235816/0, in_queue=1235816, util=97.66% 00:18:35.580 nvme10n1: ios=14308/0, merge=0/0, ticks=1232689/0, in_queue=1232689, util=97.78% 00:18:35.580 nvme1n1: ios=9509/0, merge=0/0, ticks=1239048/0, in_queue=1239048, util=97.78% 00:18:35.580 nvme2n1: ios=13870/0, merge=0/0, ticks=1240782/0, in_queue=1240782, util=97.91% 00:18:35.580 nvme3n1: ios=14323/0, merge=0/0, ticks=1240631/0, in_queue=1240631, util=97.93% 00:18:35.580 nvme4n1: ios=10005/0, merge=0/0, ticks=1241016/0, in_queue=1241016, util=98.35% 00:18:35.580 nvme5n1: ios=10358/0, merge=0/0, ticks=1236371/0, in_queue=1236371, util=98.25% 00:18:35.580 nvme6n1: ios=9236/0, merge=0/0, ticks=1241470/0, in_queue=1241470, util=98.55% 00:18:35.580 nvme7n1: ios=10024/0, merge=0/0, ticks=1233592/0, in_queue=1233592, util=98.76% 00:18:35.580 nvme8n1: ios=10217/0, merge=0/0, ticks=1234661/0, in_queue=1234661, util=98.73% 00:18:35.580 nvme9n1: ios=10809/0, merge=0/0, ticks=1237081/0, in_queue=1237081, util=98.78% 00:18:35.580 15:04:57 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:35.580 [global] 00:18:35.580 thread=1 00:18:35.580 invalidate=1 00:18:35.580 rw=randwrite 00:18:35.580 
time_based=1 00:18:35.580 runtime=10 00:18:35.580 ioengine=libaio 00:18:35.580 direct=1 00:18:35.580 bs=262144 00:18:35.580 iodepth=64 00:18:35.580 norandommap=1 00:18:35.580 numjobs=1 00:18:35.580 00:18:35.580 [job0] 00:18:35.580 filename=/dev/nvme0n1 00:18:35.580 [job1] 00:18:35.580 filename=/dev/nvme10n1 00:18:35.580 [job2] 00:18:35.580 filename=/dev/nvme1n1 00:18:35.580 [job3] 00:18:35.580 filename=/dev/nvme2n1 00:18:35.580 [job4] 00:18:35.580 filename=/dev/nvme3n1 00:18:35.580 [job5] 00:18:35.580 filename=/dev/nvme4n1 00:18:35.580 [job6] 00:18:35.580 filename=/dev/nvme5n1 00:18:35.580 [job7] 00:18:35.580 filename=/dev/nvme6n1 00:18:35.580 [job8] 00:18:35.580 filename=/dev/nvme7n1 00:18:35.580 [job9] 00:18:35.580 filename=/dev/nvme8n1 00:18:35.580 [job10] 00:18:35.580 filename=/dev/nvme9n1 00:18:35.580 Could not set queue depth (nvme0n1) 00:18:35.580 Could not set queue depth (nvme10n1) 00:18:35.580 Could not set queue depth (nvme1n1) 00:18:35.580 Could not set queue depth (nvme2n1) 00:18:35.580 Could not set queue depth (nvme3n1) 00:18:35.580 Could not set queue depth (nvme4n1) 00:18:35.580 Could not set queue depth (nvme5n1) 00:18:35.580 Could not set queue depth (nvme6n1) 00:18:35.580 Could not set queue depth (nvme7n1) 00:18:35.580 Could not set queue depth (nvme8n1) 00:18:35.580 Could not set queue depth (nvme9n1) 00:18:35.580 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.580 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.580 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.580 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.580 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.580 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.580 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.580 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.580 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.580 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.580 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.580 fio-3.35 00:18:35.580 Starting 11 threads 00:18:45.599 00:18:45.599 job0: (groupid=0, jobs=1): err= 0: pid=91449: Mon Nov 18 15:05:08 2024 00:18:45.599 write: IOPS=241, BW=60.4MiB/s (63.3MB/s)(617MiB/10217msec); 0 zone resets 00:18:45.599 slat (usec): min=18, max=64061, avg=4046.29, stdev=7300.25 00:18:45.599 clat (msec): min=66, max=481, avg=260.77, stdev=30.33 00:18:45.599 lat (msec): min=66, max=481, avg=264.81, stdev=29.85 00:18:45.599 clat percentiles (msec): 00:18:45.599 | 1.00th=[ 142], 5.00th=[ 234], 10.00th=[ 241], 20.00th=[ 249], 00:18:45.600 | 30.00th=[ 253], 40.00th=[ 257], 50.00th=[ 259], 60.00th=[ 264], 00:18:45.600 | 70.00th=[ 268], 80.00th=[ 275], 90.00th=[ 284], 95.00th=[ 296], 00:18:45.600 | 99.00th=[ 384], 99.50th=[ 430], 99.90th=[ 464], 99.95th=[ 481], 
00:18:45.600 | 99.99th=[ 481] 00:18:45.600 bw ( KiB/s): min=53248, max=65536, per=5.30%, avg=61562.45, stdev=3349.40, samples=20 00:18:45.600 iops : min= 208, max= 256, avg=240.45, stdev=13.14, samples=20 00:18:45.600 lat (msec) : 100=0.49%, 250=24.84%, 500=74.68% 00:18:45.600 cpu : usr=0.64%, sys=0.78%, ctx=2724, majf=0, minf=1 00:18:45.600 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.4% 00:18:45.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:45.600 issued rwts: total=0,2468,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:45.600 job1: (groupid=0, jobs=1): err= 0: pid=91450: Mon Nov 18 15:05:08 2024 00:18:45.600 write: IOPS=723, BW=181MiB/s (190MB/s)(1824MiB/10079msec); 0 zone resets 00:18:45.600 slat (usec): min=18, max=125792, avg=1365.74, stdev=2722.35 00:18:45.600 clat (msec): min=6, max=297, avg=87.02, stdev=16.04 00:18:45.600 lat (msec): min=6, max=297, avg=88.38, stdev=16.05 00:18:45.600 clat percentiles (msec): 00:18:45.600 | 1.00th=[ 80], 5.00th=[ 81], 10.00th=[ 81], 20.00th=[ 83], 00:18:45.600 | 30.00th=[ 85], 40.00th=[ 86], 50.00th=[ 86], 60.00th=[ 87], 00:18:45.600 | 70.00th=[ 87], 80.00th=[ 88], 90.00th=[ 90], 95.00th=[ 92], 00:18:45.600 | 99.00th=[ 161], 99.50th=[ 220], 99.90th=[ 284], 99.95th=[ 288], 00:18:45.600 | 99.99th=[ 296] 00:18:45.600 bw ( KiB/s): min=100352, max=193536, per=15.93%, avg=185120.25, stdev=20215.18, samples=20 00:18:45.600 iops : min= 392, max= 756, avg=723.10, stdev=78.96, samples=20 00:18:45.600 lat (msec) : 10=0.05%, 50=0.11%, 100=97.49%, 250=1.97%, 500=0.37% 00:18:45.600 cpu : usr=1.27%, sys=2.13%, ctx=7160, majf=0, minf=1 00:18:45.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:45.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:45.600 issued rwts: total=0,7295,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:45.600 job2: (groupid=0, jobs=1): err= 0: pid=91462: Mon Nov 18 15:05:08 2024 00:18:45.600 write: IOPS=752, BW=188MiB/s (197MB/s)(1894MiB/10073msec); 0 zone resets 00:18:45.600 slat (usec): min=24, max=9970, avg=1313.89, stdev=2199.48 00:18:45.600 clat (msec): min=6, max=156, avg=83.73, stdev= 8.47 00:18:45.600 lat (msec): min=6, max=156, avg=85.04, stdev= 8.34 00:18:45.600 clat percentiles (msec): 00:18:45.600 | 1.00th=[ 47], 5.00th=[ 80], 10.00th=[ 81], 20.00th=[ 82], 00:18:45.600 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 86], 60.00th=[ 87], 00:18:45.600 | 70.00th=[ 87], 80.00th=[ 87], 90.00th=[ 88], 95.00th=[ 89], 00:18:45.600 | 99.00th=[ 92], 99.50th=[ 109], 99.90th=[ 146], 99.95th=[ 153], 00:18:45.600 | 99.99th=[ 157] 00:18:45.600 bw ( KiB/s): min=187392, max=233472, per=16.55%, avg=192332.80, stdev=9833.48, samples=20 00:18:45.600 iops : min= 732, max= 912, avg=751.30, stdev=38.41, samples=20 00:18:45.600 lat (msec) : 10=0.04%, 20=0.22%, 50=1.49%, 100=97.69%, 250=0.55% 00:18:45.600 cpu : usr=2.22%, sys=2.41%, ctx=9045, majf=0, minf=1 00:18:45.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:45.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:45.600 issued rwts: 
total=0,7576,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:45.600 job3: (groupid=0, jobs=1): err= 0: pid=91463: Mon Nov 18 15:05:08 2024 00:18:45.600 write: IOPS=237, BW=59.4MiB/s (62.3MB/s)(608MiB/10225msec); 0 zone resets 00:18:45.600 slat (usec): min=15, max=48665, avg=4114.64, stdev=7463.49 00:18:45.600 clat (msec): min=17, max=481, avg=265.05, stdev=35.91 00:18:45.600 lat (msec): min=17, max=481, avg=269.17, stdev=35.66 00:18:45.600 clat percentiles (msec): 00:18:45.600 | 1.00th=[ 82], 5.00th=[ 234], 10.00th=[ 245], 20.00th=[ 253], 00:18:45.600 | 30.00th=[ 259], 40.00th=[ 264], 50.00th=[ 271], 60.00th=[ 275], 00:18:45.600 | 70.00th=[ 279], 80.00th=[ 284], 90.00th=[ 288], 95.00th=[ 292], 00:18:45.600 | 99.00th=[ 384], 99.50th=[ 435], 99.90th=[ 464], 99.95th=[ 481], 00:18:45.600 | 99.99th=[ 481] 00:18:45.600 bw ( KiB/s): min=55296, max=66693, per=5.21%, avg=60576.25, stdev=2972.35, samples=20 00:18:45.600 iops : min= 216, max= 260, avg=236.60, stdev=11.55, samples=20 00:18:45.600 lat (msec) : 20=0.08%, 50=0.49%, 100=0.66%, 250=15.51%, 500=83.25% 00:18:45.600 cpu : usr=0.49%, sys=0.58%, ctx=2503, majf=0, minf=1 00:18:45.600 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:18:45.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:45.600 issued rwts: total=0,2430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:45.600 job4: (groupid=0, jobs=1): err= 0: pid=91464: Mon Nov 18 15:05:08 2024 00:18:45.600 write: IOPS=222, BW=55.6MiB/s (58.3MB/s)(569MiB/10227msec); 0 zone resets 00:18:45.600 slat (usec): min=21, max=107851, avg=4391.86, stdev=8745.66 00:18:45.600 clat (msec): min=26, max=485, avg=282.96, stdev=35.61 00:18:45.600 lat (msec): min=26, max=485, avg=287.35, stdev=34.94 00:18:45.600 clat percentiles (msec): 00:18:45.600 | 1.00th=[ 167], 5.00th=[ 243], 10.00th=[ 251], 20.00th=[ 264], 00:18:45.600 | 30.00th=[ 271], 40.00th=[ 279], 50.00th=[ 284], 60.00th=[ 288], 00:18:45.600 | 70.00th=[ 292], 80.00th=[ 300], 90.00th=[ 317], 95.00th=[ 347], 00:18:45.600 | 99.00th=[ 388], 99.50th=[ 435], 99.90th=[ 468], 99.95th=[ 485], 00:18:45.600 | 99.99th=[ 485] 00:18:45.600 bw ( KiB/s): min=41042, max=61440, per=4.87%, avg=56637.25, stdev=4605.87, samples=20 00:18:45.600 iops : min= 160, max= 240, avg=221.20, stdev=18.03, samples=20 00:18:45.600 lat (msec) : 50=0.35%, 250=8.88%, 500=90.77% 00:18:45.600 cpu : usr=0.48%, sys=0.70%, ctx=2878, majf=0, minf=1 00:18:45.600 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:18:45.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:45.600 issued rwts: total=0,2276,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:45.600 job5: (groupid=0, jobs=1): err= 0: pid=91465: Mon Nov 18 15:05:08 2024 00:18:45.600 write: IOPS=224, BW=56.2MiB/s (58.9MB/s)(576MiB/10243msec); 0 zone resets 00:18:45.600 slat (usec): min=19, max=86633, avg=4334.11, stdev=8456.88 00:18:45.600 clat (usec): min=526, max=500975, avg=280271.24, stdev=51688.29 00:18:45.600 lat (msec): min=2, max=501, avg=284.61, stdev=51.69 00:18:45.600 clat percentiles (msec): 00:18:45.600 | 1.00th=[ 7], 5.00th=[ 234], 10.00th=[ 251], 20.00th=[ 
266], 00:18:45.600 | 30.00th=[ 271], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 292], 00:18:45.600 | 70.00th=[ 300], 80.00th=[ 309], 90.00th=[ 317], 95.00th=[ 321], 00:18:45.600 | 99.00th=[ 405], 99.50th=[ 451], 99.90th=[ 485], 99.95th=[ 502], 00:18:45.600 | 99.99th=[ 502] 00:18:45.600 bw ( KiB/s): min=47104, max=80734, per=4.93%, avg=57340.30, stdev=6706.75, samples=20 00:18:45.600 iops : min= 184, max= 315, avg=223.75, stdev=26.17, samples=20 00:18:45.600 lat (usec) : 750=0.04% 00:18:45.600 lat (msec) : 4=0.43%, 10=0.56%, 20=0.22%, 50=0.35%, 100=0.87% 00:18:45.600 lat (msec) : 250=7.56%, 500=89.88%, 750=0.09% 00:18:45.600 cpu : usr=0.73%, sys=0.58%, ctx=2673, majf=0, minf=1 00:18:45.600 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:18:45.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:45.600 issued rwts: total=0,2302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:45.600 job6: (groupid=0, jobs=1): err= 0: pid=91466: Mon Nov 18 15:05:08 2024 00:18:45.600 write: IOPS=221, BW=55.4MiB/s (58.1MB/s)(567MiB/10231msec); 0 zone resets 00:18:45.600 slat (usec): min=19, max=91057, avg=4409.88, stdev=8662.21 00:18:45.600 clat (msec): min=28, max=480, avg=284.14, stdev=40.53 00:18:45.600 lat (msec): min=28, max=480, avg=288.55, stdev=40.15 00:18:45.600 clat percentiles (msec): 00:18:45.600 | 1.00th=[ 80], 5.00th=[ 243], 10.00th=[ 255], 20.00th=[ 266], 00:18:45.600 | 30.00th=[ 275], 40.00th=[ 284], 50.00th=[ 288], 60.00th=[ 296], 00:18:45.600 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 313], 95.00th=[ 334], 00:18:45.600 | 99.00th=[ 384], 99.50th=[ 430], 99.90th=[ 464], 99.95th=[ 481], 00:18:45.600 | 99.99th=[ 481] 00:18:45.600 bw ( KiB/s): min=40960, max=61440, per=4.86%, avg=56427.50, stdev=4714.81, samples=20 00:18:45.600 iops : min= 160, max= 240, avg=220.40, stdev=18.44, samples=20 00:18:45.600 lat (msec) : 50=0.53%, 100=0.71%, 250=6.79%, 500=91.98% 00:18:45.600 cpu : usr=0.46%, sys=0.69%, ctx=2363, majf=0, minf=1 00:18:45.600 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:18:45.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:45.600 issued rwts: total=0,2268,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:45.600 job7: (groupid=0, jobs=1): err= 0: pid=91467: Mon Nov 18 15:05:08 2024 00:18:45.600 write: IOPS=738, BW=185MiB/s (194MB/s)(1862MiB/10077msec); 0 zone resets 00:18:45.600 slat (usec): min=19, max=14947, avg=1312.21, stdev=2247.44 00:18:45.600 clat (msec): min=21, max=213, avg=85.26, stdev= 9.84 00:18:45.600 lat (msec): min=21, max=215, avg=86.57, stdev= 9.76 00:18:45.600 clat percentiles (msec): 00:18:45.600 | 1.00th=[ 64], 5.00th=[ 81], 10.00th=[ 81], 20.00th=[ 82], 00:18:45.600 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 86], 60.00th=[ 87], 00:18:45.600 | 70.00th=[ 87], 80.00th=[ 88], 90.00th=[ 88], 95.00th=[ 89], 00:18:45.600 | 99.00th=[ 123], 99.50th=[ 155], 99.90th=[ 209], 99.95th=[ 211], 00:18:45.600 | 99.99th=[ 213] 00:18:45.601 bw ( KiB/s): min=165376, max=193536, per=16.27%, avg=189004.80, stdev=5812.30, samples=20 00:18:45.601 iops : min= 646, max= 756, avg=738.30, stdev=22.70, samples=20 00:18:45.601 lat (msec) : 50=0.34%, 100=98.29%, 250=1.37% 00:18:45.601 cpu 
: usr=1.24%, sys=2.08%, ctx=8837, majf=0, minf=1 00:18:45.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:45.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:45.601 issued rwts: total=0,7446,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.601 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:45.601 job8: (groupid=0, jobs=1): err= 0: pid=91468: Mon Nov 18 15:05:08 2024 00:18:45.601 write: IOPS=246, BW=61.6MiB/s (64.6MB/s)(630MiB/10222msec); 0 zone resets 00:18:45.601 slat (usec): min=23, max=76243, avg=3892.59, stdev=7805.38 00:18:45.601 clat (usec): min=668, max=504480, avg=255563.19, stdev=74730.65 00:18:45.601 lat (usec): min=717, max=504531, avg=259455.78, stdev=75520.30 00:18:45.601 clat percentiles (msec): 00:18:45.601 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 182], 20.00th=[ 249], 00:18:45.601 | 30.00th=[ 262], 40.00th=[ 271], 50.00th=[ 279], 60.00th=[ 284], 00:18:45.601 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 300], 95.00th=[ 305], 00:18:45.601 | 99.00th=[ 384], 99.50th=[ 456], 99.90th=[ 489], 99.95th=[ 506], 00:18:45.601 | 99.99th=[ 506] 00:18:45.601 bw ( KiB/s): min=51200, max=151855, per=5.41%, avg=62914.35, stdev=21094.55, samples=20 00:18:45.601 iops : min= 200, max= 593, avg=245.75, stdev=82.36, samples=20 00:18:45.601 lat (usec) : 750=0.04%, 1000=0.12% 00:18:45.601 lat (msec) : 2=0.40%, 4=0.79%, 10=2.82%, 20=1.55%, 50=0.32% 00:18:45.601 lat (msec) : 100=1.27%, 250=12.82%, 500=79.80%, 750=0.08% 00:18:45.601 cpu : usr=0.50%, sys=0.86%, ctx=2702, majf=0, minf=1 00:18:45.601 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:18:45.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:45.601 issued rwts: total=0,2520,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.601 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:45.601 job9: (groupid=0, jobs=1): err= 0: pid=91469: Mon Nov 18 15:05:08 2024 00:18:45.601 write: IOPS=724, BW=181MiB/s (190MB/s)(1826MiB/10075msec); 0 zone resets 00:18:45.601 slat (usec): min=20, max=72776, avg=1363.90, stdev=2484.59 00:18:45.601 clat (msec): min=72, max=270, avg=86.89, stdev=13.21 00:18:45.601 lat (msec): min=76, max=270, avg=88.26, stdev=13.18 00:18:45.601 clat percentiles (msec): 00:18:45.601 | 1.00th=[ 80], 5.00th=[ 81], 10.00th=[ 81], 20.00th=[ 83], 00:18:45.601 | 30.00th=[ 85], 40.00th=[ 86], 50.00th=[ 86], 60.00th=[ 87], 00:18:45.601 | 70.00th=[ 87], 80.00th=[ 88], 90.00th=[ 89], 95.00th=[ 92], 00:18:45.601 | 99.00th=[ 159], 99.50th=[ 184], 99.90th=[ 247], 99.95th=[ 259], 00:18:45.601 | 99.99th=[ 271] 00:18:45.601 bw ( KiB/s): min=104448, max=193536, per=15.95%, avg=185344.00, stdev=19280.76, samples=20 00:18:45.601 iops : min= 408, max= 756, avg=724.00, stdev=75.32, samples=20 00:18:45.601 lat (msec) : 100=97.56%, 250=2.34%, 500=0.10% 00:18:45.601 cpu : usr=1.29%, sys=2.13%, ctx=8754, majf=0, minf=1 00:18:45.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:45.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:45.601 issued rwts: total=0,7303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.601 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:45.601 job10: (groupid=0, 
jobs=1): err= 0: pid=91470: Mon Nov 18 15:05:08 2024 00:18:45.601 write: IOPS=255, BW=63.8MiB/s (66.9MB/s)(653MiB/10226msec); 0 zone resets 00:18:45.601 slat (usec): min=18, max=39185, avg=3744.39, stdev=6719.39 00:18:45.601 clat (msec): min=6, max=487, avg=246.76, stdev=42.80 00:18:45.601 lat (msec): min=6, max=487, avg=250.51, stdev=43.05 00:18:45.601 clat percentiles (msec): 00:18:45.601 | 1.00th=[ 52], 5.00th=[ 188], 10.00th=[ 230], 20.00th=[ 241], 00:18:45.601 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 257], 00:18:45.601 | 70.00th=[ 259], 80.00th=[ 264], 90.00th=[ 268], 95.00th=[ 271], 00:18:45.601 | 99.00th=[ 372], 99.50th=[ 439], 99.90th=[ 472], 99.95th=[ 489], 00:18:45.601 | 99.99th=[ 489] 00:18:45.601 bw ( KiB/s): min=59904, max=92998, per=5.61%, avg=65187.55, stdev=6983.42, samples=20 00:18:45.601 iops : min= 234, max= 363, avg=254.60, stdev=27.23, samples=20 00:18:45.601 lat (msec) : 10=0.50%, 50=0.46%, 100=1.65%, 250=42.80%, 500=54.60% 00:18:45.601 cpu : usr=0.56%, sys=0.76%, ctx=2989, majf=0, minf=1 00:18:45.601 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:18:45.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:45.601 issued rwts: total=0,2610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.601 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:45.601 00:18:45.601 Run status group 0 (all jobs): 00:18:45.601 WRITE: bw=1135MiB/s (1190MB/s), 55.4MiB/s-188MiB/s (58.1MB/s-197MB/s), io=11.4GiB (12.2GB), run=10073-10243msec 00:18:45.601 00:18:45.601 Disk stats (read/write): 00:18:45.601 nvme0n1: ios=49/4806, merge=0/0, ticks=24/1204719, in_queue=1204743, util=97.56% 00:18:45.601 nvme10n1: ios=49/14453, merge=0/0, ticks=49/1215961, in_queue=1216010, util=97.95% 00:18:45.601 nvme1n1: ios=31/15005, merge=0/0, ticks=23/1215140, in_queue=1215163, util=97.89% 00:18:45.601 nvme2n1: ios=20/4724, merge=0/0, ticks=27/1204199, in_queue=1204226, util=97.91% 00:18:45.601 nvme3n1: ios=0/4422, merge=0/0, ticks=0/1203429, in_queue=1203429, util=97.97% 00:18:45.601 nvme4n1: ios=0/4482, merge=0/0, ticks=0/1205822, in_queue=1205822, util=98.34% 00:18:45.601 nvme5n1: ios=0/4404, merge=0/0, ticks=0/1204133, in_queue=1204133, util=98.34% 00:18:45.601 nvme6n1: ios=0/14748, merge=0/0, ticks=0/1216674, in_queue=1216674, util=98.35% 00:18:45.601 nvme7n1: ios=0/4912, merge=0/0, ticks=0/1204801, in_queue=1204801, util=98.65% 00:18:45.601 nvme8n1: ios=0/14464, merge=0/0, ticks=0/1215823, in_queue=1215823, util=98.77% 00:18:45.601 nvme9n1: ios=0/5097, merge=0/0, ticks=0/1208131, in_queue=1208131, util=98.98% 00:18:45.601 15:05:08 -- target/multiconnection.sh@36 -- # sync 00:18:45.601 15:05:08 -- target/multiconnection.sh@37 -- # seq 1 11 00:18:45.601 15:05:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.601 15:05:08 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:45.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:45.601 15:05:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:18:45.601 15:05:08 -- common/autotest_common.sh@1208 -- # local i=0 00:18:45.601 15:05:08 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:18:45.601 15:05:08 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:45.601 15:05:08 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:45.601 15:05:08 -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:18:45.601 15:05:08 -- common/autotest_common.sh@1220 -- # return 0 00:18:45.601 15:05:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:45.601 15:05:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.601 15:05:08 -- common/autotest_common.sh@10 -- # set +x 00:18:45.601 15:05:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.601 15:05:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.601 15:05:08 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:18:45.601 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:18:45.601 15:05:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:18:45.601 15:05:08 -- common/autotest_common.sh@1208 -- # local i=0 00:18:45.601 15:05:08 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:45.601 15:05:08 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:18:45.601 15:05:08 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:45.601 15:05:08 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:18:45.601 15:05:08 -- common/autotest_common.sh@1220 -- # return 0 00:18:45.601 15:05:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:45.601 15:05:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.601 15:05:08 -- common/autotest_common.sh@10 -- # set +x 00:18:45.601 15:05:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.601 15:05:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.601 15:05:08 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:18:45.601 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:18:45.601 15:05:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:18:45.601 15:05:08 -- common/autotest_common.sh@1208 -- # local i=0 00:18:45.601 15:05:08 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:45.601 15:05:08 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:18:45.601 15:05:08 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:45.601 15:05:08 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:18:45.601 15:05:08 -- common/autotest_common.sh@1220 -- # return 0 00:18:45.601 15:05:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:45.601 15:05:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.601 15:05:08 -- common/autotest_common.sh@10 -- # set +x 00:18:45.601 15:05:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.601 15:05:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.601 15:05:08 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:18:45.601 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:18:45.601 15:05:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:18:45.601 15:05:08 -- common/autotest_common.sh@1208 -- # local i=0 00:18:45.601 15:05:08 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:18:45.601 15:05:08 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:45.601 15:05:08 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:45.601 15:05:08 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:18:45.601 15:05:08 -- 
common/autotest_common.sh@1220 -- # return 0 00:18:45.601 15:05:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:45.601 15:05:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.601 15:05:08 -- common/autotest_common.sh@10 -- # set +x 00:18:45.601 15:05:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.601 15:05:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.601 15:05:08 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:18:45.602 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:18:45.602 15:05:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:18:45.602 15:05:08 -- common/autotest_common.sh@1208 -- # local i=0 00:18:45.602 15:05:08 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:45.602 15:05:08 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:18:45.602 15:05:08 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:18:45.602 15:05:08 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:45.602 15:05:08 -- common/autotest_common.sh@1220 -- # return 0 00:18:45.602 15:05:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:18:45.602 15:05:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.602 15:05:08 -- common/autotest_common.sh@10 -- # set +x 00:18:45.602 15:05:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.602 15:05:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.602 15:05:08 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:18:45.602 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:18:45.602 15:05:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:18:45.602 15:05:08 -- common/autotest_common.sh@1208 -- # local i=0 00:18:45.602 15:05:08 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:45.602 15:05:08 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:18:45.602 15:05:08 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:45.602 15:05:08 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:18:45.602 15:05:08 -- common/autotest_common.sh@1220 -- # return 0 00:18:45.602 15:05:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:18:45.602 15:05:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.602 15:05:08 -- common/autotest_common.sh@10 -- # set +x 00:18:45.602 15:05:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.602 15:05:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.602 15:05:08 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:18:45.602 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:18:45.602 15:05:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:18:45.602 15:05:08 -- common/autotest_common.sh@1208 -- # local i=0 00:18:45.602 15:05:08 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:45.602 15:05:08 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:18:45.602 15:05:08 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:45.602 15:05:08 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:18:45.602 15:05:08 -- common/autotest_common.sh@1220 -- # return 0 00:18:45.602 15:05:08 -- target/multiconnection.sh@40 
-- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:18:45.602 15:05:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.602 15:05:08 -- common/autotest_common.sh@10 -- # set +x 00:18:45.602 15:05:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.602 15:05:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.602 15:05:08 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:18:45.602 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:18:45.602 15:05:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:18:45.602 15:05:08 -- common/autotest_common.sh@1208 -- # local i=0 00:18:45.602 15:05:08 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:45.602 15:05:08 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:18:45.602 15:05:08 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:45.602 15:05:08 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:18:45.602 15:05:08 -- common/autotest_common.sh@1220 -- # return 0 00:18:45.602 15:05:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:18:45.602 15:05:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.602 15:05:08 -- common/autotest_common.sh@10 -- # set +x 00:18:45.602 15:05:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.602 15:05:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.602 15:05:08 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:18:45.602 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:18:45.602 15:05:09 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:18:45.602 15:05:09 -- common/autotest_common.sh@1208 -- # local i=0 00:18:45.602 15:05:09 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:45.602 15:05:09 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:18:45.602 15:05:09 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:45.602 15:05:09 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:18:45.602 15:05:09 -- common/autotest_common.sh@1220 -- # return 0 00:18:45.602 15:05:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:18:45.602 15:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.602 15:05:09 -- common/autotest_common.sh@10 -- # set +x 00:18:45.602 15:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.602 15:05:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.602 15:05:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:18:45.602 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:18:45.602 15:05:09 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:18:45.602 15:05:09 -- common/autotest_common.sh@1208 -- # local i=0 00:18:45.602 15:05:09 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:45.602 15:05:09 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:18:45.602 15:05:09 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:45.602 15:05:09 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:18:45.602 15:05:09 -- common/autotest_common.sh@1220 -- # return 0 00:18:45.602 15:05:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:18:45.602 15:05:09 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.602 15:05:09 -- common/autotest_common.sh@10 -- # set +x 00:18:45.602 15:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.602 15:05:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.602 15:05:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:18:45.602 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:18:45.602 15:05:09 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:18:45.602 15:05:09 -- common/autotest_common.sh@1208 -- # local i=0 00:18:45.861 15:05:09 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:45.861 15:05:09 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:18:45.861 15:05:09 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:45.861 15:05:09 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:18:45.861 15:05:09 -- common/autotest_common.sh@1220 -- # return 0 00:18:45.861 15:05:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:18:45.861 15:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.861 15:05:09 -- common/autotest_common.sh@10 -- # set +x 00:18:45.861 15:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.861 15:05:09 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:18:45.861 15:05:09 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:45.861 15:05:09 -- target/multiconnection.sh@47 -- # nvmftestfini 00:18:45.861 15:05:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:45.861 15:05:09 -- nvmf/common.sh@116 -- # sync 00:18:45.861 15:05:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:45.861 15:05:09 -- nvmf/common.sh@119 -- # set +e 00:18:45.861 15:05:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:45.861 15:05:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:45.861 rmmod nvme_tcp 00:18:45.861 rmmod nvme_fabrics 00:18:45.861 rmmod nvme_keyring 00:18:45.861 15:05:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:45.861 15:05:09 -- nvmf/common.sh@123 -- # set -e 00:18:45.861 15:05:09 -- nvmf/common.sh@124 -- # return 0 00:18:45.861 15:05:09 -- nvmf/common.sh@477 -- # '[' -n 90766 ']' 00:18:45.861 15:05:09 -- nvmf/common.sh@478 -- # killprocess 90766 00:18:45.861 15:05:09 -- common/autotest_common.sh@936 -- # '[' -z 90766 ']' 00:18:45.861 15:05:09 -- common/autotest_common.sh@940 -- # kill -0 90766 00:18:45.861 15:05:09 -- common/autotest_common.sh@941 -- # uname 00:18:45.861 15:05:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:45.861 15:05:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90766 00:18:45.861 15:05:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:45.861 15:05:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:45.861 15:05:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90766' 00:18:45.861 killing process with pid 90766 00:18:45.861 15:05:09 -- common/autotest_common.sh@955 -- # kill 90766 00:18:45.861 15:05:09 -- common/autotest_common.sh@960 -- # wait 90766 00:18:46.429 15:05:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:46.429 15:05:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:46.429 15:05:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:46.429 15:05:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:46.429 15:05:09 
-- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:46.429 15:05:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.429 15:05:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.429 15:05:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.429 15:05:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:46.429 ************************************ 00:18:46.429 END TEST nvmf_multiconnection 00:18:46.429 ************************************ 00:18:46.429 00:18:46.429 real 0m49.901s 00:18:46.429 user 2m53.656s 00:18:46.429 sys 0m20.786s 00:18:46.429 15:05:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:46.429 15:05:09 -- common/autotest_common.sh@10 -- # set +x 00:18:46.429 15:05:09 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:46.429 15:05:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:46.429 15:05:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:46.429 15:05:09 -- common/autotest_common.sh@10 -- # set +x 00:18:46.429 ************************************ 00:18:46.429 START TEST nvmf_initiator_timeout 00:18:46.429 ************************************ 00:18:46.429 15:05:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:46.429 * Looking for test storage... 00:18:46.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:46.429 15:05:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:46.429 15:05:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:46.429 15:05:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:46.688 15:05:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:46.688 15:05:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:46.688 15:05:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:46.688 15:05:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:46.688 15:05:10 -- scripts/common.sh@335 -- # IFS=.-: 00:18:46.688 15:05:10 -- scripts/common.sh@335 -- # read -ra ver1 00:18:46.688 15:05:10 -- scripts/common.sh@336 -- # IFS=.-: 00:18:46.688 15:05:10 -- scripts/common.sh@336 -- # read -ra ver2 00:18:46.688 15:05:10 -- scripts/common.sh@337 -- # local 'op=<' 00:18:46.688 15:05:10 -- scripts/common.sh@339 -- # ver1_l=2 00:18:46.688 15:05:10 -- scripts/common.sh@340 -- # ver2_l=1 00:18:46.688 15:05:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:46.688 15:05:10 -- scripts/common.sh@343 -- # case "$op" in 00:18:46.688 15:05:10 -- scripts/common.sh@344 -- # : 1 00:18:46.688 15:05:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:46.688 15:05:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:46.688 15:05:10 -- scripts/common.sh@364 -- # decimal 1 00:18:46.688 15:05:10 -- scripts/common.sh@352 -- # local d=1 00:18:46.688 15:05:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:46.688 15:05:10 -- scripts/common.sh@354 -- # echo 1 00:18:46.688 15:05:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:46.688 15:05:10 -- scripts/common.sh@365 -- # decimal 2 00:18:46.688 15:05:10 -- scripts/common.sh@352 -- # local d=2 00:18:46.688 15:05:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:46.688 15:05:10 -- scripts/common.sh@354 -- # echo 2 00:18:46.688 15:05:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:46.688 15:05:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:46.688 15:05:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:46.688 15:05:10 -- scripts/common.sh@367 -- # return 0 00:18:46.688 15:05:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:46.688 15:05:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:46.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.688 --rc genhtml_branch_coverage=1 00:18:46.688 --rc genhtml_function_coverage=1 00:18:46.688 --rc genhtml_legend=1 00:18:46.688 --rc geninfo_all_blocks=1 00:18:46.688 --rc geninfo_unexecuted_blocks=1 00:18:46.688 00:18:46.688 ' 00:18:46.688 15:05:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:46.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.688 --rc genhtml_branch_coverage=1 00:18:46.688 --rc genhtml_function_coverage=1 00:18:46.688 --rc genhtml_legend=1 00:18:46.688 --rc geninfo_all_blocks=1 00:18:46.688 --rc geninfo_unexecuted_blocks=1 00:18:46.688 00:18:46.688 ' 00:18:46.688 15:05:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:46.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.688 --rc genhtml_branch_coverage=1 00:18:46.688 --rc genhtml_function_coverage=1 00:18:46.688 --rc genhtml_legend=1 00:18:46.688 --rc geninfo_all_blocks=1 00:18:46.688 --rc geninfo_unexecuted_blocks=1 00:18:46.688 00:18:46.688 ' 00:18:46.688 15:05:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:46.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.688 --rc genhtml_branch_coverage=1 00:18:46.688 --rc genhtml_function_coverage=1 00:18:46.688 --rc genhtml_legend=1 00:18:46.688 --rc geninfo_all_blocks=1 00:18:46.688 --rc geninfo_unexecuted_blocks=1 00:18:46.688 00:18:46.688 ' 00:18:46.688 15:05:10 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:46.688 15:05:10 -- nvmf/common.sh@7 -- # uname -s 00:18:46.688 15:05:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.688 15:05:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.688 15:05:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.688 15:05:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.688 15:05:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.688 15:05:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.688 15:05:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.688 15:05:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.688 15:05:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.688 15:05:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.688 15:05:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 
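The trace above generates the host identity once with nvme gen-hostnqn and reuses it for every connect in this test. A minimal sketch of that pattern, assuming standard nvme-cli; the exact way common.sh derives NVME_HOSTID from the NQN may differ from the parameter expansion shown here:

  # Generate a UUID-based host NQN once and derive the bare UUID to use as the host ID
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumption: host ID is the UUID after the last ':'
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

  # Every connect then presents the same host identity to the target
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420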
00:18:46.689 15:05:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:18:46.689 15:05:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.689 15:05:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.689 15:05:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:46.689 15:05:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:46.689 15:05:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.689 15:05:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.689 15:05:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.689 15:05:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.689 15:05:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.689 15:05:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.689 15:05:10 -- paths/export.sh@5 -- # export PATH 00:18:46.689 15:05:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.689 15:05:10 -- nvmf/common.sh@46 -- # : 0 00:18:46.689 15:05:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:46.689 15:05:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:46.689 15:05:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:46.689 15:05:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.689 15:05:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.689 15:05:10 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:18:46.689 15:05:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:46.689 15:05:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:46.689 15:05:10 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:46.689 15:05:10 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:46.689 15:05:10 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:18:46.689 15:05:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:46.689 15:05:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.689 15:05:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:46.689 15:05:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:46.689 15:05:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:46.689 15:05:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.689 15:05:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.689 15:05:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.689 15:05:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:46.689 15:05:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:46.689 15:05:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:46.689 15:05:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:46.689 15:05:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:46.689 15:05:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:46.689 15:05:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:46.689 15:05:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:46.689 15:05:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:46.689 15:05:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:46.689 15:05:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:46.689 15:05:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:46.689 15:05:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:46.689 15:05:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:46.689 15:05:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:46.689 15:05:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:46.689 15:05:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:46.689 15:05:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:46.689 15:05:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:46.689 15:05:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:46.689 Cannot find device "nvmf_tgt_br" 00:18:46.689 15:05:10 -- nvmf/common.sh@154 -- # true 00:18:46.689 15:05:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:46.689 Cannot find device "nvmf_tgt_br2" 00:18:46.689 15:05:10 -- nvmf/common.sh@155 -- # true 00:18:46.689 15:05:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:46.689 15:05:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:46.689 Cannot find device "nvmf_tgt_br" 00:18:46.689 15:05:10 -- nvmf/common.sh@157 -- # true 00:18:46.689 15:05:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:46.689 Cannot find device "nvmf_tgt_br2" 00:18:46.689 15:05:10 -- nvmf/common.sh@158 -- # true 00:18:46.689 15:05:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:46.689 15:05:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:46.689 15:05:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:18:46.689 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:46.689 15:05:10 -- nvmf/common.sh@161 -- # true 00:18:46.689 15:05:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:46.689 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:46.689 15:05:10 -- nvmf/common.sh@162 -- # true 00:18:46.689 15:05:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:46.689 15:05:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:46.689 15:05:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:46.689 15:05:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:46.689 15:05:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:46.689 15:05:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:46.689 15:05:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:46.949 15:05:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:46.949 15:05:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:46.949 15:05:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:46.949 15:05:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:46.949 15:05:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:46.949 15:05:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:46.949 15:05:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:46.949 15:05:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:46.949 15:05:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:46.949 15:05:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:46.949 15:05:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:46.949 15:05:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:46.949 15:05:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:46.949 15:05:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:46.949 15:05:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:46.949 15:05:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:46.949 15:05:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:46.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:46.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:18:46.949 00:18:46.949 --- 10.0.0.2 ping statistics --- 00:18:46.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.949 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:18:46.949 15:05:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:46.949 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:46.949 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:18:46.949 00:18:46.949 --- 10.0.0.3 ping statistics --- 00:18:46.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.949 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:46.949 15:05:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:46.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:46.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:46.949 00:18:46.949 --- 10.0.0.1 ping statistics --- 00:18:46.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.949 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:46.949 15:05:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:46.949 15:05:10 -- nvmf/common.sh@421 -- # return 0 00:18:46.949 15:05:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:46.949 15:05:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:46.949 15:05:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:46.949 15:05:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:46.949 15:05:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:46.949 15:05:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:46.949 15:05:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:46.949 15:05:10 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:18:46.949 15:05:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:46.949 15:05:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:46.949 15:05:10 -- common/autotest_common.sh@10 -- # set +x 00:18:46.949 15:05:10 -- nvmf/common.sh@469 -- # nvmfpid=91848 00:18:46.949 15:05:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:46.949 15:05:10 -- nvmf/common.sh@470 -- # waitforlisten 91848 00:18:46.949 15:05:10 -- common/autotest_common.sh@829 -- # '[' -z 91848 ']' 00:18:46.949 15:05:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.949 15:05:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:46.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.949 15:05:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.949 15:05:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:46.949 15:05:10 -- common/autotest_common.sh@10 -- # set +x 00:18:46.949 [2024-11-18 15:05:10.484794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:46.949 [2024-11-18 15:05:10.484882] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.208 [2024-11-18 15:05:10.629785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:47.208 [2024-11-18 15:05:10.700984] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:47.208 [2024-11-18 15:05:10.701164] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.208 [2024-11-18 15:05:10.701183] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.208 [2024-11-18 15:05:10.701195] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
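nvmf_veth_init above builds a self-contained test network before the target starts: the initiator stays in the default namespace on 10.0.0.1, the two target interfaces live in the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, everything hangs off the nvmf_br bridge, and the pings confirm connectivity in both directions. A condensed sketch of that sequence for one veth pair (the script repeats it for the second target interface and also brings up the loopback inside the namespace):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                       # join both halves on the bridge
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                            # initiator -> target sanity check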
00:18:47.208 [2024-11-18 15:05:10.701369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.208 [2024-11-18 15:05:10.701450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.208 [2024-11-18 15:05:10.701608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:47.208 [2024-11-18 15:05:10.701618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.146 15:05:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:48.146 15:05:11 -- common/autotest_common.sh@862 -- # return 0 00:18:48.146 15:05:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:48.146 15:05:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:48.146 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:18:48.146 15:05:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.146 15:05:11 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:48.146 15:05:11 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:48.146 15:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.146 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:18:48.146 Malloc0 00:18:48.146 15:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.146 15:05:11 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:18:48.146 15:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.146 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:18:48.146 Delay0 00:18:48.146 15:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.146 15:05:11 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:48.146 15:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.146 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:18:48.146 [2024-11-18 15:05:11.610748] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.146 15:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.146 15:05:11 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:48.146 15:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.146 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:18:48.146 15:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.146 15:05:11 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:48.146 15:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.146 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:18:48.146 15:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.146 15:05:11 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:48.146 15:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.146 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:18:48.146 [2024-11-18 15:05:11.638944] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.146 15:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.146 15:05:11 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:48.405 15:05:11 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:18:48.405 15:05:11 -- common/autotest_common.sh@1187 -- # local i=0 00:18:48.405 15:05:11 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:48.405 15:05:11 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:48.405 15:05:11 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:50.310 15:05:13 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:50.310 15:05:13 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:50.310 15:05:13 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:18:50.310 15:05:13 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:50.310 15:05:13 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:50.310 15:05:13 -- common/autotest_common.sh@1197 -- # return 0 00:18:50.310 15:05:13 -- target/initiator_timeout.sh@35 -- # fio_pid=91930 00:18:50.311 15:05:13 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:18:50.311 15:05:13 -- target/initiator_timeout.sh@37 -- # sleep 3 00:18:50.311 [global] 00:18:50.311 thread=1 00:18:50.311 invalidate=1 00:18:50.311 rw=write 00:18:50.311 time_based=1 00:18:50.311 runtime=60 00:18:50.311 ioengine=libaio 00:18:50.311 direct=1 00:18:50.311 bs=4096 00:18:50.311 iodepth=1 00:18:50.311 norandommap=0 00:18:50.311 numjobs=1 00:18:50.311 00:18:50.311 verify_dump=1 00:18:50.311 verify_backlog=512 00:18:50.311 verify_state_save=0 00:18:50.311 do_verify=1 00:18:50.311 verify=crc32c-intel 00:18:50.311 [job0] 00:18:50.311 filename=/dev/nvme0n1 00:18:50.311 Could not set queue depth (nvme0n1) 00:18:50.569 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:50.569 fio-3.35 00:18:50.569 Starting 1 thread 00:18:53.858 15:05:16 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:18:53.858 15:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.858 15:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:53.858 true 00:18:53.858 15:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.858 15:05:16 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:18:53.858 15:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.858 15:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:53.858 true 00:18:53.858 15:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.858 15:05:16 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:18:53.858 15:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.858 15:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:53.858 true 00:18:53.858 15:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.858 15:05:16 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:18:53.858 15:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.858 15:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:53.858 true 00:18:53.858 15:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.858 15:05:16 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:18:56.392 15:05:19 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:18:56.392 15:05:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.392 15:05:19 -- common/autotest_common.sh@10 -- # set +x 00:18:56.392 true 00:18:56.392 15:05:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.392 15:05:19 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:18:56.392 15:05:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.392 15:05:19 -- common/autotest_common.sh@10 -- # set +x 00:18:56.392 true 00:18:56.392 15:05:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.392 15:05:19 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:18:56.392 15:05:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.392 15:05:19 -- common/autotest_common.sh@10 -- # set +x 00:18:56.392 true 00:18:56.392 15:05:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.392 15:05:19 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:18:56.392 15:05:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.392 15:05:19 -- common/autotest_common.sh@10 -- # set +x 00:18:56.392 true 00:18:56.392 15:05:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.392 15:05:19 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:18:56.392 15:05:19 -- target/initiator_timeout.sh@54 -- # wait 91930 00:19:52.625 00:19:52.625 job0: (groupid=0, jobs=1): err= 0: pid=91951: Mon Nov 18 15:06:14 2024 00:19:52.625 read: IOPS=834, BW=3337KiB/s (3417kB/s)(196MiB/60000msec) 00:19:52.625 slat (usec): min=10, max=14226, avg=13.43, stdev=72.41 00:19:52.625 clat (usec): min=149, max=40787k, avg=1008.12, stdev=182314.95 00:19:52.625 lat (usec): min=161, max=40787k, avg=1021.55, stdev=182314.97 00:19:52.625 clat percentiles (usec): 00:19:52.625 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176], 00:19:52.625 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:19:52.625 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 219], 95.00th=[ 229], 00:19:52.625 | 99.00th=[ 258], 99.50th=[ 289], 99.90th=[ 510], 99.95th=[ 660], 00:19:52.625 | 99.99th=[ 1139] 00:19:52.625 write: IOPS=836, BW=3345KiB/s (3425kB/s)(196MiB/60000msec); 0 zone resets 00:19:52.625 slat (usec): min=13, max=723, avg=20.30, stdev= 7.19 00:19:52.625 clat (usec): min=114, max=2231, avg=153.33, stdev=28.27 00:19:52.625 lat (usec): min=132, max=2254, avg=173.64, stdev=29.82 00:19:52.625 clat percentiles (usec): 00:19:52.625 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:19:52.625 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 153], 00:19:52.625 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 176], 95.00th=[ 188], 00:19:52.625 | 99.00th=[ 217], 99.50th=[ 241], 99.90th=[ 474], 99.95th=[ 652], 00:19:52.625 | 99.99th=[ 996] 00:19:52.625 bw ( KiB/s): min= 6104, max=12288, per=100.00%, avg=10347.79, stdev=1295.30, samples=38 00:19:52.625 iops : min= 1526, max= 3072, avg=2586.95, stdev=323.82, samples=38 00:19:52.625 lat (usec) : 250=99.10%, 500=0.80%, 750=0.07%, 1000=0.02% 00:19:52.625 lat (msec) : 2=0.01%, 4=0.01%, 50=0.01%, >=2000=0.01% 00:19:52.625 cpu : usr=0.52%, sys=2.10%, ctx=100244, majf=0, minf=5 00:19:52.625 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:52.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.626 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.626 issued rwts: total=50050,50176,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.626 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:52.626 00:19:52.626 Run status group 0 (all jobs): 00:19:52.626 READ: bw=3337KiB/s (3417kB/s), 3337KiB/s-3337KiB/s (3417kB/s-3417kB/s), io=196MiB (205MB), run=60000-60000msec 00:19:52.626 WRITE: bw=3345KiB/s (3425kB/s), 3345KiB/s-3345KiB/s (3425kB/s-3425kB/s), io=196MiB (206MB), run=60000-60000msec 00:19:52.626 00:19:52.626 Disk stats (read/write): 00:19:52.626 nvme0n1: ios=49974/50092, merge=0/0, ticks=10163/8160, in_queue=18323, util=99.74% 00:19:52.626 15:06:14 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:52.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:52.626 15:06:14 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:52.626 15:06:14 -- common/autotest_common.sh@1208 -- # local i=0 00:19:52.626 15:06:14 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:52.626 15:06:14 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:52.626 15:06:14 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:52.626 15:06:14 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:52.626 15:06:14 -- common/autotest_common.sh@1220 -- # return 0 00:19:52.626 15:06:14 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:19:52.626 nvmf hotplug test: fio successful as expected 00:19:52.626 15:06:14 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:19:52.626 15:06:14 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:52.626 15:06:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.626 15:06:14 -- common/autotest_common.sh@10 -- # set +x 00:19:52.626 15:06:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.626 15:06:14 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:19:52.626 15:06:14 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:19:52.626 15:06:14 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:19:52.626 15:06:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:52.626 15:06:14 -- nvmf/common.sh@116 -- # sync 00:19:52.626 15:06:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:52.626 15:06:14 -- nvmf/common.sh@119 -- # set +e 00:19:52.626 15:06:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:52.626 15:06:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:52.626 rmmod nvme_tcp 00:19:52.626 rmmod nvme_fabrics 00:19:52.626 rmmod nvme_keyring 00:19:52.626 15:06:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:52.626 15:06:14 -- nvmf/common.sh@123 -- # set -e 00:19:52.626 15:06:14 -- nvmf/common.sh@124 -- # return 0 00:19:52.626 15:06:14 -- nvmf/common.sh@477 -- # '[' -n 91848 ']' 00:19:52.626 15:06:14 -- nvmf/common.sh@478 -- # killprocess 91848 00:19:52.626 15:06:14 -- common/autotest_common.sh@936 -- # '[' -z 91848 ']' 00:19:52.626 15:06:14 -- common/autotest_common.sh@940 -- # kill -0 91848 00:19:52.626 15:06:14 -- common/autotest_common.sh@941 -- # uname 00:19:52.626 15:06:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:52.626 15:06:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91848 00:19:52.626 15:06:14 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:52.626 15:06:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:52.626 15:06:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91848' 00:19:52.626 killing process with pid 91848 00:19:52.626 15:06:14 -- common/autotest_common.sh@955 -- # kill 91848 00:19:52.626 15:06:14 -- common/autotest_common.sh@960 -- # wait 91848 00:19:52.626 15:06:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:52.626 15:06:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:52.626 15:06:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:52.626 15:06:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:52.626 15:06:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:52.626 15:06:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.626 15:06:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.626 15:06:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.626 15:06:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:52.626 00:19:52.626 real 1m4.699s 00:19:52.626 user 4m7.153s 00:19:52.626 sys 0m8.181s 00:19:52.626 15:06:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:52.626 15:06:14 -- common/autotest_common.sh@10 -- # set +x 00:19:52.626 ************************************ 00:19:52.626 END TEST nvmf_initiator_timeout 00:19:52.626 ************************************ 00:19:52.626 15:06:14 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:19:52.626 15:06:14 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:19:52.626 15:06:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:52.626 15:06:14 -- common/autotest_common.sh@10 -- # set +x 00:19:52.626 15:06:14 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:19:52.626 15:06:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:52.626 15:06:14 -- common/autotest_common.sh@10 -- # set +x 00:19:52.626 15:06:14 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:19:52.626 15:06:14 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:52.626 15:06:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:52.626 15:06:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:52.626 15:06:14 -- common/autotest_common.sh@10 -- # set +x 00:19:52.626 ************************************ 00:19:52.626 START TEST nvmf_multicontroller 00:19:52.626 ************************************ 00:19:52.626 15:06:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:52.626 * Looking for test storage... 
00:19:52.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:52.626 15:06:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:52.626 15:06:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:52.626 15:06:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:52.626 15:06:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:52.626 15:06:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:52.626 15:06:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:52.626 15:06:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:52.626 15:06:14 -- scripts/common.sh@335 -- # IFS=.-: 00:19:52.626 15:06:14 -- scripts/common.sh@335 -- # read -ra ver1 00:19:52.626 15:06:14 -- scripts/common.sh@336 -- # IFS=.-: 00:19:52.626 15:06:14 -- scripts/common.sh@336 -- # read -ra ver2 00:19:52.626 15:06:14 -- scripts/common.sh@337 -- # local 'op=<' 00:19:52.626 15:06:14 -- scripts/common.sh@339 -- # ver1_l=2 00:19:52.626 15:06:14 -- scripts/common.sh@340 -- # ver2_l=1 00:19:52.626 15:06:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:52.626 15:06:14 -- scripts/common.sh@343 -- # case "$op" in 00:19:52.626 15:06:14 -- scripts/common.sh@344 -- # : 1 00:19:52.626 15:06:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:52.626 15:06:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:52.626 15:06:14 -- scripts/common.sh@364 -- # decimal 1 00:19:52.626 15:06:14 -- scripts/common.sh@352 -- # local d=1 00:19:52.626 15:06:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:52.626 15:06:14 -- scripts/common.sh@354 -- # echo 1 00:19:52.626 15:06:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:52.626 15:06:14 -- scripts/common.sh@365 -- # decimal 2 00:19:52.626 15:06:14 -- scripts/common.sh@352 -- # local d=2 00:19:52.626 15:06:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:52.626 15:06:14 -- scripts/common.sh@354 -- # echo 2 00:19:52.626 15:06:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:52.626 15:06:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:52.626 15:06:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:52.627 15:06:14 -- scripts/common.sh@367 -- # return 0 00:19:52.627 15:06:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:52.627 15:06:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:52.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.627 --rc genhtml_branch_coverage=1 00:19:52.627 --rc genhtml_function_coverage=1 00:19:52.627 --rc genhtml_legend=1 00:19:52.627 --rc geninfo_all_blocks=1 00:19:52.627 --rc geninfo_unexecuted_blocks=1 00:19:52.627 00:19:52.627 ' 00:19:52.627 15:06:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:52.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.627 --rc genhtml_branch_coverage=1 00:19:52.627 --rc genhtml_function_coverage=1 00:19:52.627 --rc genhtml_legend=1 00:19:52.627 --rc geninfo_all_blocks=1 00:19:52.627 --rc geninfo_unexecuted_blocks=1 00:19:52.627 00:19:52.627 ' 00:19:52.627 15:06:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:52.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.627 --rc genhtml_branch_coverage=1 00:19:52.627 --rc genhtml_function_coverage=1 00:19:52.627 --rc genhtml_legend=1 00:19:52.627 --rc geninfo_all_blocks=1 00:19:52.627 --rc geninfo_unexecuted_blocks=1 00:19:52.627 00:19:52.627 ' 00:19:52.627 
15:06:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:52.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.627 --rc genhtml_branch_coverage=1 00:19:52.627 --rc genhtml_function_coverage=1 00:19:52.627 --rc genhtml_legend=1 00:19:52.627 --rc geninfo_all_blocks=1 00:19:52.627 --rc geninfo_unexecuted_blocks=1 00:19:52.627 00:19:52.627 ' 00:19:52.627 15:06:14 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:52.627 15:06:14 -- nvmf/common.sh@7 -- # uname -s 00:19:52.627 15:06:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.627 15:06:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.627 15:06:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.627 15:06:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.627 15:06:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.627 15:06:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.627 15:06:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.627 15:06:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.627 15:06:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.627 15:06:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.627 15:06:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:19:52.627 15:06:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:19:52.627 15:06:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.627 15:06:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.627 15:06:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:52.627 15:06:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:52.627 15:06:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.627 15:06:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.627 15:06:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.627 15:06:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.627 15:06:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.627 15:06:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.627 15:06:14 -- paths/export.sh@5 -- # export PATH 00:19:52.627 15:06:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.627 15:06:14 -- nvmf/common.sh@46 -- # : 0 00:19:52.627 15:06:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:52.627 15:06:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:52.627 15:06:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:52.627 15:06:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.627 15:06:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.627 15:06:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:52.627 15:06:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:52.627 15:06:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:52.627 15:06:14 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:52.627 15:06:14 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:52.627 15:06:14 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:52.627 15:06:14 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:52.627 15:06:14 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:52.627 15:06:14 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:52.627 15:06:14 -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:52.627 15:06:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:52.627 15:06:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.627 15:06:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:52.627 15:06:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:52.627 15:06:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:52.627 15:06:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.627 15:06:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.627 15:06:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.627 15:06:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:52.627 15:06:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:52.627 15:06:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:52.627 15:06:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:52.627 15:06:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:52.627 15:06:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:52.627 15:06:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.627 15:06:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
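host/multicontroller.sh declares its fixtures here: a 64 MiB malloc bdev with 512-byte blocks, two host-side ports (60000/60001), and a dedicated bdevperf RPC socket. The target configuration those feed is built over RPC once the application is up; the sketch below is condensed from the rpc_cmd calls that appear further down in this trace (rpc_cmd is the test suite's wrapper around scripts/rpc.py, and the transport options are copied verbatim):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0                 # RAM-backed namespace, 64 MiB, 512 B blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The second listener on port 4421 gives the host a second target-side path to the same subsystem.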
00:19:52.627 15:06:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:52.627 15:06:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:52.627 15:06:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:52.627 15:06:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:52.627 15:06:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:52.627 15:06:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.627 15:06:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:52.627 15:06:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:52.627 15:06:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:52.627 15:06:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:52.627 15:06:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:52.627 15:06:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:52.627 Cannot find device "nvmf_tgt_br" 00:19:52.627 15:06:14 -- nvmf/common.sh@154 -- # true 00:19:52.627 15:06:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:52.627 Cannot find device "nvmf_tgt_br2" 00:19:52.627 15:06:14 -- nvmf/common.sh@155 -- # true 00:19:52.627 15:06:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:52.627 15:06:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:52.627 Cannot find device "nvmf_tgt_br" 00:19:52.627 15:06:14 -- nvmf/common.sh@157 -- # true 00:19:52.627 15:06:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:52.627 Cannot find device "nvmf_tgt_br2" 00:19:52.627 15:06:14 -- nvmf/common.sh@158 -- # true 00:19:52.627 15:06:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:52.627 15:06:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:52.627 15:06:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:52.627 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:52.627 15:06:14 -- nvmf/common.sh@161 -- # true 00:19:52.627 15:06:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:52.627 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:52.627 15:06:15 -- nvmf/common.sh@162 -- # true 00:19:52.627 15:06:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:52.628 15:06:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:52.628 15:06:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:52.628 15:06:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:52.628 15:06:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:52.628 15:06:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:52.628 15:06:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:52.628 15:06:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:52.628 15:06:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:52.628 15:06:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:52.628 15:06:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:52.628 15:06:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:19:52.628 15:06:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:52.628 15:06:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:52.628 15:06:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:52.628 15:06:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:52.628 15:06:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:52.628 15:06:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:52.628 15:06:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:52.628 15:06:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:52.628 15:06:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:52.628 15:06:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:52.628 15:06:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:52.628 15:06:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:52.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:52.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:19:52.628 00:19:52.628 --- 10.0.0.2 ping statistics --- 00:19:52.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.628 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:19:52.628 15:06:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:52.628 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:52.628 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:19:52.628 00:19:52.628 --- 10.0.0.3 ping statistics --- 00:19:52.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.628 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:52.628 15:06:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:52.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:52.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:19:52.628 00:19:52.628 --- 10.0.0.1 ping statistics --- 00:19:52.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.628 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:19:52.628 15:06:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.628 15:06:15 -- nvmf/common.sh@421 -- # return 0 00:19:52.628 15:06:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:52.628 15:06:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.628 15:06:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:52.628 15:06:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:52.628 15:06:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:52.628 15:06:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:52.628 15:06:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:52.628 15:06:15 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:52.628 15:06:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:52.628 15:06:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:52.628 15:06:15 -- common/autotest_common.sh@10 -- # set +x 00:19:52.628 15:06:15 -- nvmf/common.sh@469 -- # nvmfpid=92796 00:19:52.628 15:06:15 -- nvmf/common.sh@470 -- # waitforlisten 92796 00:19:52.628 15:06:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:52.628 15:06:15 -- common/autotest_common.sh@829 -- # '[' -z 92796 ']' 00:19:52.628 15:06:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.628 15:06:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:52.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.628 15:06:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.628 15:06:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:52.628 15:06:15 -- common/autotest_common.sh@10 -- # set +x 00:19:52.628 [2024-11-18 15:06:15.241231] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:52.628 [2024-11-18 15:06:15.241337] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.628 [2024-11-18 15:06:15.378610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:52.628 [2024-11-18 15:06:15.448496] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:52.628 [2024-11-18 15:06:15.448646] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.628 [2024-11-18 15:06:15.448658] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.628 [2024-11-18 15:06:15.448667] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
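nvmfappstart above launches the target binary inside the test namespace and then blocks until its RPC socket answers; waitforlisten hides the polling. A simplified stand-in for that launch-and-wait step (the real helper also applies a timeout and a few extra checks):

  # Start the target inside the namespace in the background and remember its pid
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!

  # Poll the RPC socket until the application is ready to accept commands
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      kill -0 "$nvmfpid" || exit 1       # give up if the target died during startup
      sleep 0.5
  done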
00:19:52.628 [2024-11-18 15:06:15.449084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.628 [2024-11-18 15:06:15.449190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:52.628 [2024-11-18 15:06:15.449201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.888 15:06:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:52.888 15:06:16 -- common/autotest_common.sh@862 -- # return 0 00:19:52.888 15:06:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:52.888 15:06:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:52.888 15:06:16 -- common/autotest_common.sh@10 -- # set +x 00:19:52.888 15:06:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.888 15:06:16 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:52.888 15:06:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.888 15:06:16 -- common/autotest_common.sh@10 -- # set +x 00:19:52.888 [2024-11-18 15:06:16.303791] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.888 15:06:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.888 15:06:16 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:52.888 15:06:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.888 15:06:16 -- common/autotest_common.sh@10 -- # set +x 00:19:52.888 Malloc0 00:19:52.888 15:06:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.888 15:06:16 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:52.888 15:06:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.888 15:06:16 -- common/autotest_common.sh@10 -- # set +x 00:19:52.888 15:06:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.888 15:06:16 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:52.888 15:06:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.888 15:06:16 -- common/autotest_common.sh@10 -- # set +x 00:19:52.888 15:06:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.888 15:06:16 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:52.888 15:06:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.888 15:06:16 -- common/autotest_common.sh@10 -- # set +x 00:19:52.888 [2024-11-18 15:06:16.388776] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.888 15:06:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.888 15:06:16 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:52.888 15:06:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.888 15:06:16 -- common/autotest_common.sh@10 -- # set +x 00:19:52.888 [2024-11-18 15:06:16.396630] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:52.888 15:06:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.888 15:06:16 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:52.888 15:06:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.888 15:06:16 -- common/autotest_common.sh@10 -- # set +x 00:19:52.888 Malloc1 00:19:52.888 15:06:16 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.888 15:06:16 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:52.888 15:06:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.888 15:06:16 -- common/autotest_common.sh@10 -- # set +x 00:19:52.888 15:06:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.888 15:06:16 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:52.888 15:06:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.888 15:06:16 -- common/autotest_common.sh@10 -- # set +x 00:19:52.888 15:06:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.888 15:06:16 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:52.888 15:06:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.888 15:06:16 -- common/autotest_common.sh@10 -- # set +x 00:19:52.888 15:06:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.888 15:06:16 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:52.888 15:06:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.888 15:06:16 -- common/autotest_common.sh@10 -- # set +x 00:19:52.888 15:06:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.888 15:06:16 -- host/multicontroller.sh@44 -- # bdevperf_pid=92854 00:19:52.888 15:06:16 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:52.888 15:06:16 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:52.888 15:06:16 -- host/multicontroller.sh@47 -- # waitforlisten 92854 /var/tmp/bdevperf.sock 00:19:52.888 15:06:16 -- common/autotest_common.sh@829 -- # '[' -z 92854 ']' 00:19:52.888 15:06:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:52.888 15:06:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:52.888 15:06:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:52.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
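bdevperf is started with -z, so it idles on its own RPC socket and waits to be configured; the attach that follows pins the connection's host-side address and source port with -i/-c, which is exactly the network path the later duplicate-attach attempts collide with. A sketch of that first attach, issued against the bdevperf socket rather than the target's, with the flags copied from the call below:

  # bdevperf waits with -z until it is configured over /var/tmp/bdevperf.sock
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &

  # Attach cnode1 as bdev "NVMe0", fixing hostaddr (-i) and hostsvcid (-c) for this path
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

Re-running the call with a different hostnqn or subnqn but the same controller name and network path is expected to fail, which is what the Code=-114 responses below demonstrate.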
00:19:52.889 15:06:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:52.889 15:06:16 -- common/autotest_common.sh@10 -- # set +x 00:19:54.266 15:06:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:54.266 15:06:17 -- common/autotest_common.sh@862 -- # return 0 00:19:54.266 15:06:17 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:54.266 15:06:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.266 15:06:17 -- common/autotest_common.sh@10 -- # set +x 00:19:54.266 NVMe0n1 00:19:54.266 15:06:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.266 15:06:17 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:54.266 15:06:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.266 15:06:17 -- common/autotest_common.sh@10 -- # set +x 00:19:54.266 15:06:17 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:54.266 15:06:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.266 1 00:19:54.266 15:06:17 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:54.266 15:06:17 -- common/autotest_common.sh@650 -- # local es=0 00:19:54.266 15:06:17 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:54.266 15:06:17 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:54.266 15:06:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:54.266 15:06:17 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:54.266 15:06:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:54.266 15:06:17 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:54.266 15:06:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.266 15:06:17 -- common/autotest_common.sh@10 -- # set +x 00:19:54.266 2024/11/18 15:06:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:54.266 request: 00:19:54.266 { 00:19:54.266 "method": "bdev_nvme_attach_controller", 00:19:54.266 "params": { 00:19:54.266 "name": "NVMe0", 00:19:54.266 "trtype": "tcp", 00:19:54.266 "traddr": "10.0.0.2", 00:19:54.266 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:54.266 "hostaddr": "10.0.0.2", 00:19:54.267 "hostsvcid": "60000", 00:19:54.267 "adrfam": "ipv4", 00:19:54.267 "trsvcid": "4420", 00:19:54.267 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:19:54.267 } 00:19:54.267 } 00:19:54.267 Got JSON-RPC error response 00:19:54.267 GoRPCClient: error on JSON-RPC call 00:19:54.267 15:06:17 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:54.267 15:06:17 -- 
common/autotest_common.sh@653 -- # es=1 00:19:54.267 15:06:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:54.267 15:06:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:54.267 15:06:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:54.267 15:06:17 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:54.267 15:06:17 -- common/autotest_common.sh@650 -- # local es=0 00:19:54.267 15:06:17 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:54.267 15:06:17 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:54.267 15:06:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:54.267 15:06:17 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:54.267 15:06:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:54.267 15:06:17 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:54.267 15:06:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.267 15:06:17 -- common/autotest_common.sh@10 -- # set +x 00:19:54.267 2024/11/18 15:06:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:54.267 request: 00:19:54.267 { 00:19:54.267 "method": "bdev_nvme_attach_controller", 00:19:54.267 "params": { 00:19:54.267 "name": "NVMe0", 00:19:54.267 "trtype": "tcp", 00:19:54.267 "traddr": "10.0.0.2", 00:19:54.267 "hostaddr": "10.0.0.2", 00:19:54.267 "hostsvcid": "60000", 00:19:54.267 "adrfam": "ipv4", 00:19:54.267 "trsvcid": "4420", 00:19:54.267 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:19:54.267 } 00:19:54.267 } 00:19:54.267 Got JSON-RPC error response 00:19:54.267 GoRPCClient: error on JSON-RPC call 00:19:54.267 15:06:17 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:54.267 15:06:17 -- common/autotest_common.sh@653 -- # es=1 00:19:54.267 15:06:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:54.267 15:06:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:54.267 15:06:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:54.267 15:06:17 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:54.267 15:06:17 -- common/autotest_common.sh@650 -- # local es=0 00:19:54.267 15:06:17 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:54.267 15:06:17 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:54.267 15:06:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:54.267 15:06:17 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:54.267 15:06:17 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:54.267 15:06:17 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:54.267 15:06:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.267 15:06:17 -- common/autotest_common.sh@10 -- # set +x 00:19:54.267 2024/11/18 15:06:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:19:54.267 request: 00:19:54.267 { 00:19:54.267 "method": "bdev_nvme_attach_controller", 00:19:54.267 "params": { 00:19:54.267 "name": "NVMe0", 00:19:54.267 "trtype": "tcp", 00:19:54.267 "traddr": "10.0.0.2", 00:19:54.267 "hostaddr": "10.0.0.2", 00:19:54.267 "hostsvcid": "60000", 00:19:54.267 "adrfam": "ipv4", 00:19:54.267 "trsvcid": "4420", 00:19:54.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.267 "multipath": "disable" 00:19:54.267 } 00:19:54.267 } 00:19:54.267 Got JSON-RPC error response 00:19:54.267 GoRPCClient: error on JSON-RPC call 00:19:54.267 15:06:17 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:54.267 15:06:17 -- common/autotest_common.sh@653 -- # es=1 00:19:54.267 15:06:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:54.267 15:06:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:54.267 15:06:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:54.267 15:06:17 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:54.267 15:06:17 -- common/autotest_common.sh@650 -- # local es=0 00:19:54.267 15:06:17 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:54.267 15:06:17 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:54.267 15:06:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:54.267 15:06:17 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:54.267 15:06:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:54.267 15:06:17 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:54.267 15:06:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.267 15:06:17 -- common/autotest_common.sh@10 -- # set +x 00:19:54.267 2024/11/18 15:06:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:54.267 request: 00:19:54.267 { 00:19:54.267 "method": "bdev_nvme_attach_controller", 00:19:54.267 "params": { 00:19:54.267 "name": "NVMe0", 
00:19:54.267 "trtype": "tcp", 00:19:54.267 "traddr": "10.0.0.2", 00:19:54.267 "hostaddr": "10.0.0.2", 00:19:54.267 "hostsvcid": "60000", 00:19:54.267 "adrfam": "ipv4", 00:19:54.267 "trsvcid": "4420", 00:19:54.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.267 "multipath": "failover" 00:19:54.267 } 00:19:54.267 } 00:19:54.267 Got JSON-RPC error response 00:19:54.267 GoRPCClient: error on JSON-RPC call 00:19:54.267 15:06:17 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:54.267 15:06:17 -- common/autotest_common.sh@653 -- # es=1 00:19:54.267 15:06:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:54.267 15:06:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:54.267 15:06:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:54.267 15:06:17 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:54.267 15:06:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.267 15:06:17 -- common/autotest_common.sh@10 -- # set +x 00:19:54.267 00:19:54.267 15:06:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.267 15:06:17 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:54.267 15:06:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.267 15:06:17 -- common/autotest_common.sh@10 -- # set +x 00:19:54.267 15:06:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.267 15:06:17 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:54.267 15:06:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.267 15:06:17 -- common/autotest_common.sh@10 -- # set +x 00:19:54.267 00:19:54.267 15:06:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.267 15:06:17 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:54.267 15:06:17 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:54.267 15:06:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.267 15:06:17 -- common/autotest_common.sh@10 -- # set +x 00:19:54.267 15:06:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.267 15:06:17 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:54.267 15:06:17 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:55.646 0 00:19:55.646 15:06:18 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:19:55.646 15:06:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.646 15:06:18 -- common/autotest_common.sh@10 -- # set +x 00:19:55.646 15:06:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.646 15:06:18 -- host/multicontroller.sh@100 -- # killprocess 92854 00:19:55.646 15:06:18 -- common/autotest_common.sh@936 -- # '[' -z 92854 ']' 00:19:55.646 15:06:18 -- common/autotest_common.sh@940 -- # kill -0 92854 00:19:55.646 15:06:18 -- common/autotest_common.sh@941 -- # uname 00:19:55.646 15:06:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:55.646 15:06:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92854 00:19:55.646 15:06:18 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:19:55.646 15:06:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:55.646 killing process with pid 92854 00:19:55.646 15:06:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92854' 00:19:55.646 15:06:18 -- common/autotest_common.sh@955 -- # kill 92854 00:19:55.646 15:06:18 -- common/autotest_common.sh@960 -- # wait 92854 00:19:55.646 15:06:19 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.646 15:06:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.646 15:06:19 -- common/autotest_common.sh@10 -- # set +x 00:19:55.646 15:06:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.646 15:06:19 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:55.646 15:06:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.646 15:06:19 -- common/autotest_common.sh@10 -- # set +x 00:19:55.646 15:06:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.646 15:06:19 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:19:55.646 15:06:19 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:55.646 15:06:19 -- common/autotest_common.sh@1607 -- # read -r file 00:19:55.646 15:06:19 -- common/autotest_common.sh@1606 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:19:55.646 15:06:19 -- common/autotest_common.sh@1606 -- # sort -u 00:19:55.646 15:06:19 -- common/autotest_common.sh@1608 -- # cat 00:19:55.646 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:55.646 [2024-11-18 15:06:16.519078] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:55.646 [2024-11-18 15:06:16.519699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92854 ] 00:19:55.646 [2024-11-18 15:06:16.655654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.646 [2024-11-18 15:06:16.724817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.646 [2024-11-18 15:06:17.760993] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name e3dae972-3f88-45de-bc73-f143120477c6 already exists 00:19:55.646 [2024-11-18 15:06:17.761044] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:e3dae972-3f88-45de-bc73-f143120477c6 alias for bdev NVMe1n1 00:19:55.646 [2024-11-18 15:06:17.761080] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:55.646 Running I/O for 1 seconds... 
00:19:55.646 00:19:55.646 Latency(us) 00:19:55.646 [2024-11-18T15:06:19.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.646 [2024-11-18T15:06:19.236Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:55.646 NVMe0n1 : 1.01 22759.80 88.91 0.00 0.00 5617.36 3172.54 11200.70 00:19:55.646 [2024-11-18T15:06:19.236Z] =================================================================================================================== 00:19:55.646 [2024-11-18T15:06:19.236Z] Total : 22759.80 88.91 0.00 0.00 5617.36 3172.54 11200.70 00:19:55.646 Received shutdown signal, test time was about 1.000000 seconds 00:19:55.646 00:19:55.646 Latency(us) 00:19:55.646 [2024-11-18T15:06:19.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.646 [2024-11-18T15:06:19.236Z] =================================================================================================================== 00:19:55.646 [2024-11-18T15:06:19.236Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:55.646 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:55.646 15:06:19 -- common/autotest_common.sh@1613 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:55.646 15:06:19 -- common/autotest_common.sh@1607 -- # read -r file 00:19:55.646 15:06:19 -- host/multicontroller.sh@108 -- # nvmftestfini 00:19:55.646 15:06:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:55.646 15:06:19 -- nvmf/common.sh@116 -- # sync 00:19:55.906 15:06:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:55.906 15:06:19 -- nvmf/common.sh@119 -- # set +e 00:19:55.906 15:06:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:55.906 15:06:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:55.906 rmmod nvme_tcp 00:19:55.906 rmmod nvme_fabrics 00:19:55.906 rmmod nvme_keyring 00:19:55.906 15:06:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:55.906 15:06:19 -- nvmf/common.sh@123 -- # set -e 00:19:55.906 15:06:19 -- nvmf/common.sh@124 -- # return 0 00:19:55.906 15:06:19 -- nvmf/common.sh@477 -- # '[' -n 92796 ']' 00:19:55.906 15:06:19 -- nvmf/common.sh@478 -- # killprocess 92796 00:19:55.906 15:06:19 -- common/autotest_common.sh@936 -- # '[' -z 92796 ']' 00:19:55.906 15:06:19 -- common/autotest_common.sh@940 -- # kill -0 92796 00:19:55.906 15:06:19 -- common/autotest_common.sh@941 -- # uname 00:19:55.906 15:06:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:55.906 15:06:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92796 00:19:55.906 15:06:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:55.906 killing process with pid 92796 00:19:55.906 15:06:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:55.906 15:06:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92796' 00:19:55.906 15:06:19 -- common/autotest_common.sh@955 -- # kill 92796 00:19:55.906 15:06:19 -- common/autotest_common.sh@960 -- # wait 92796 00:19:56.165 15:06:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:56.165 15:06:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:56.165 15:06:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:56.165 15:06:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:56.165 15:06:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:56.165 15:06:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.165 15:06:19 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:56.165 15:06:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.165 15:06:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:56.165 00:19:56.165 real 0m5.059s 00:19:56.165 user 0m15.717s 00:19:56.165 sys 0m1.161s 00:19:56.165 15:06:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:56.165 15:06:19 -- common/autotest_common.sh@10 -- # set +x 00:19:56.165 ************************************ 00:19:56.165 END TEST nvmf_multicontroller 00:19:56.165 ************************************ 00:19:56.425 15:06:19 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:56.425 15:06:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:56.425 15:06:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:56.425 15:06:19 -- common/autotest_common.sh@10 -- # set +x 00:19:56.425 ************************************ 00:19:56.425 START TEST nvmf_aer 00:19:56.425 ************************************ 00:19:56.425 15:06:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:56.425 * Looking for test storage... 00:19:56.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:56.425 15:06:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:56.425 15:06:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:56.425 15:06:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:56.425 15:06:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:56.425 15:06:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:56.425 15:06:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:56.425 15:06:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:56.425 15:06:19 -- scripts/common.sh@335 -- # IFS=.-: 00:19:56.425 15:06:19 -- scripts/common.sh@335 -- # read -ra ver1 00:19:56.425 15:06:19 -- scripts/common.sh@336 -- # IFS=.-: 00:19:56.425 15:06:19 -- scripts/common.sh@336 -- # read -ra ver2 00:19:56.425 15:06:19 -- scripts/common.sh@337 -- # local 'op=<' 00:19:56.425 15:06:19 -- scripts/common.sh@339 -- # ver1_l=2 00:19:56.425 15:06:19 -- scripts/common.sh@340 -- # ver2_l=1 00:19:56.425 15:06:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:56.425 15:06:19 -- scripts/common.sh@343 -- # case "$op" in 00:19:56.425 15:06:19 -- scripts/common.sh@344 -- # : 1 00:19:56.425 15:06:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:56.425 15:06:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:56.425 15:06:19 -- scripts/common.sh@364 -- # decimal 1 00:19:56.425 15:06:19 -- scripts/common.sh@352 -- # local d=1 00:19:56.425 15:06:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:56.425 15:06:19 -- scripts/common.sh@354 -- # echo 1 00:19:56.425 15:06:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:56.425 15:06:19 -- scripts/common.sh@365 -- # decimal 2 00:19:56.425 15:06:19 -- scripts/common.sh@352 -- # local d=2 00:19:56.425 15:06:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:56.425 15:06:19 -- scripts/common.sh@354 -- # echo 2 00:19:56.425 15:06:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:56.425 15:06:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:56.425 15:06:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:56.425 15:06:19 -- scripts/common.sh@367 -- # return 0 00:19:56.425 15:06:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:56.425 15:06:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:56.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.425 --rc genhtml_branch_coverage=1 00:19:56.425 --rc genhtml_function_coverage=1 00:19:56.425 --rc genhtml_legend=1 00:19:56.425 --rc geninfo_all_blocks=1 00:19:56.425 --rc geninfo_unexecuted_blocks=1 00:19:56.425 00:19:56.425 ' 00:19:56.425 15:06:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:56.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.425 --rc genhtml_branch_coverage=1 00:19:56.425 --rc genhtml_function_coverage=1 00:19:56.425 --rc genhtml_legend=1 00:19:56.425 --rc geninfo_all_blocks=1 00:19:56.425 --rc geninfo_unexecuted_blocks=1 00:19:56.425 00:19:56.425 ' 00:19:56.425 15:06:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:56.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.425 --rc genhtml_branch_coverage=1 00:19:56.425 --rc genhtml_function_coverage=1 00:19:56.425 --rc genhtml_legend=1 00:19:56.425 --rc geninfo_all_blocks=1 00:19:56.425 --rc geninfo_unexecuted_blocks=1 00:19:56.425 00:19:56.425 ' 00:19:56.425 15:06:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:56.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.425 --rc genhtml_branch_coverage=1 00:19:56.425 --rc genhtml_function_coverage=1 00:19:56.425 --rc genhtml_legend=1 00:19:56.425 --rc geninfo_all_blocks=1 00:19:56.425 --rc geninfo_unexecuted_blocks=1 00:19:56.425 00:19:56.425 ' 00:19:56.425 15:06:19 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:56.425 15:06:19 -- nvmf/common.sh@7 -- # uname -s 00:19:56.425 15:06:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.425 15:06:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.425 15:06:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.425 15:06:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.425 15:06:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.425 15:06:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.425 15:06:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.425 15:06:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.425 15:06:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.425 15:06:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.425 15:06:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:19:56.425 
15:06:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:19:56.425 15:06:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.425 15:06:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.425 15:06:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:56.425 15:06:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:56.425 15:06:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.425 15:06:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.425 15:06:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.425 15:06:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.425 15:06:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.425 15:06:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.425 15:06:19 -- paths/export.sh@5 -- # export PATH 00:19:56.425 15:06:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.425 15:06:19 -- nvmf/common.sh@46 -- # : 0 00:19:56.425 15:06:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:56.425 15:06:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:56.425 15:06:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:56.425 15:06:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.425 15:06:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.425 15:06:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:19:56.425 15:06:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:56.425 15:06:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:56.425 15:06:19 -- host/aer.sh@11 -- # nvmftestinit 00:19:56.425 15:06:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:56.425 15:06:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.425 15:06:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:56.425 15:06:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:56.425 15:06:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:56.425 15:06:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.425 15:06:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.425 15:06:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.425 15:06:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:56.425 15:06:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:56.425 15:06:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:56.425 15:06:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:56.425 15:06:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:56.426 15:06:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:56.426 15:06:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:56.426 15:06:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:56.426 15:06:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:56.426 15:06:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:56.426 15:06:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:56.426 15:06:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:56.426 15:06:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:56.426 15:06:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:56.426 15:06:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:56.426 15:06:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:56.426 15:06:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:56.426 15:06:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:56.426 15:06:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:56.426 15:06:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:56.686 Cannot find device "nvmf_tgt_br" 00:19:56.686 15:06:20 -- nvmf/common.sh@154 -- # true 00:19:56.686 15:06:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:56.686 Cannot find device "nvmf_tgt_br2" 00:19:56.686 15:06:20 -- nvmf/common.sh@155 -- # true 00:19:56.686 15:06:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:56.686 15:06:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:56.686 Cannot find device "nvmf_tgt_br" 00:19:56.686 15:06:20 -- nvmf/common.sh@157 -- # true 00:19:56.686 15:06:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:56.686 Cannot find device "nvmf_tgt_br2" 00:19:56.686 15:06:20 -- nvmf/common.sh@158 -- # true 00:19:56.686 15:06:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:56.686 15:06:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:56.686 15:06:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:56.686 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:56.686 15:06:20 -- nvmf/common.sh@161 -- # true 00:19:56.686 15:06:20 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:56.686 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:56.686 15:06:20 -- nvmf/common.sh@162 -- # true 00:19:56.686 15:06:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:56.686 15:06:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:56.686 15:06:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:56.686 15:06:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:56.686 15:06:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:56.686 15:06:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:56.686 15:06:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:56.686 15:06:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:56.686 15:06:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:56.686 15:06:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:56.686 15:06:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:56.686 15:06:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:56.686 15:06:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:56.686 15:06:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:56.686 15:06:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:56.686 15:06:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:56.686 15:06:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:56.686 15:06:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:56.686 15:06:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:56.686 15:06:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:56.686 15:06:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:56.686 15:06:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:56.686 15:06:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:56.686 15:06:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:56.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:56.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:19:56.686 00:19:56.686 --- 10.0.0.2 ping statistics --- 00:19:56.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.686 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:19:56.686 15:06:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:56.686 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:56.686 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:19:56.686 00:19:56.686 --- 10.0.0.3 ping statistics --- 00:19:56.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.686 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:56.686 15:06:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:56.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:56.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:19:56.686 00:19:56.686 --- 10.0.0.1 ping statistics --- 00:19:56.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.686 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:19:56.686 15:06:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:56.686 15:06:20 -- nvmf/common.sh@421 -- # return 0 00:19:56.686 15:06:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:56.686 15:06:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:56.686 15:06:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:56.686 15:06:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:56.686 15:06:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:56.686 15:06:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:56.686 15:06:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:56.946 15:06:20 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:56.946 15:06:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:56.946 15:06:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:56.946 15:06:20 -- common/autotest_common.sh@10 -- # set +x 00:19:56.946 15:06:20 -- nvmf/common.sh@469 -- # nvmfpid=93106 00:19:56.946 15:06:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:56.946 15:06:20 -- nvmf/common.sh@470 -- # waitforlisten 93106 00:19:56.946 15:06:20 -- common/autotest_common.sh@829 -- # '[' -z 93106 ']' 00:19:56.946 15:06:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.946 15:06:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:56.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.946 15:06:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.946 15:06:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:56.946 15:06:20 -- common/autotest_common.sh@10 -- # set +x 00:19:56.946 [2024-11-18 15:06:20.341653] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:56.946 [2024-11-18 15:06:20.341731] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.946 [2024-11-18 15:06:20.474252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:57.205 [2024-11-18 15:06:20.560175] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:57.205 [2024-11-18 15:06:20.560790] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.205 [2024-11-18 15:06:20.560946] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.205 [2024-11-18 15:06:20.561078] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
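The nvmf_veth_init sequence above is what the three pings verify: the initiator side stays in the root namespace on 10.0.0.1, while both target addresses sit on veth peers inside the nvmf_tgt_ns_spdk namespace, joined through the nvmf_br bridge. A condensed sketch of the same wiring, using the interface names from the trace (second target interface and link bring-up omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # the target app is then started inside the namespace so it owns 10.0.0.2
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF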
00:19:57.205 [2024-11-18 15:06:20.561370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.205 [2024-11-18 15:06:20.561477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:57.205 [2024-11-18 15:06:20.561483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.205 [2024-11-18 15:06:20.561432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.141 15:06:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:58.141 15:06:21 -- common/autotest_common.sh@862 -- # return 0 00:19:58.141 15:06:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:58.141 15:06:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:58.141 15:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:58.141 15:06:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.141 15:06:21 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:58.141 15:06:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.141 15:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:58.141 [2024-11-18 15:06:21.436682] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.141 15:06:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.141 15:06:21 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:58.141 15:06:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.141 15:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:58.141 Malloc0 00:19:58.141 15:06:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.141 15:06:21 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:58.141 15:06:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.141 15:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:58.141 15:06:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.141 15:06:21 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:58.141 15:06:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.141 15:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:58.141 15:06:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.141 15:06:21 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:58.141 15:06:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.141 15:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:58.141 [2024-11-18 15:06:21.507621] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.141 15:06:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.142 15:06:21 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:58.142 15:06:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.142 15:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:58.142 [2024-11-18 15:06:21.515362] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:58.142 [ 00:19:58.142 { 00:19:58.142 "allow_any_host": true, 00:19:58.142 "hosts": [], 00:19:58.142 "listen_addresses": [], 00:19:58.142 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:58.142 "subtype": "Discovery" 00:19:58.142 }, 00:19:58.142 { 00:19:58.142 "allow_any_host": true, 00:19:58.142 "hosts": 
[], 00:19:58.142 "listen_addresses": [ 00:19:58.142 { 00:19:58.142 "adrfam": "IPv4", 00:19:58.142 "traddr": "10.0.0.2", 00:19:58.142 "transport": "TCP", 00:19:58.142 "trsvcid": "4420", 00:19:58.142 "trtype": "TCP" 00:19:58.142 } 00:19:58.142 ], 00:19:58.142 "max_cntlid": 65519, 00:19:58.142 "max_namespaces": 2, 00:19:58.142 "min_cntlid": 1, 00:19:58.142 "model_number": "SPDK bdev Controller", 00:19:58.142 "namespaces": [ 00:19:58.142 { 00:19:58.142 "bdev_name": "Malloc0", 00:19:58.142 "name": "Malloc0", 00:19:58.142 "nguid": "0AFF531DA6A746148188BA3B6C666318", 00:19:58.142 "nsid": 1, 00:19:58.142 "uuid": "0aff531d-a6a7-4614-8188-ba3b6c666318" 00:19:58.142 } 00:19:58.142 ], 00:19:58.142 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.142 "serial_number": "SPDK00000000000001", 00:19:58.142 "subtype": "NVMe" 00:19:58.142 } 00:19:58.142 ] 00:19:58.142 15:06:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.142 15:06:21 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:58.142 15:06:21 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:58.142 15:06:21 -- host/aer.sh@33 -- # aerpid=93166 00:19:58.142 15:06:21 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:58.142 15:06:21 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:58.142 15:06:21 -- common/autotest_common.sh@1254 -- # local i=0 00:19:58.142 15:06:21 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:58.142 15:06:21 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:19:58.142 15:06:21 -- common/autotest_common.sh@1257 -- # i=1 00:19:58.142 15:06:21 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:19:58.142 15:06:21 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:58.142 15:06:21 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:19:58.142 15:06:21 -- common/autotest_common.sh@1257 -- # i=2 00:19:58.142 15:06:21 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:19:58.402 15:06:21 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:58.402 15:06:21 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:58.402 15:06:21 -- common/autotest_common.sh@1265 -- # return 0 00:19:58.402 15:06:21 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:58.402 15:06:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.402 15:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:58.402 Malloc1 00:19:58.402 15:06:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.402 15:06:21 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:58.402 15:06:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.402 15:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:58.402 15:06:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.402 15:06:21 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:58.402 15:06:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.402 15:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:58.402 Asynchronous Event Request test 00:19:58.402 Attaching to 10.0.0.2 00:19:58.402 Attached to 10.0.0.2 00:19:58.402 Registering asynchronous event callbacks... 00:19:58.402 Starting namespace attribute notice tests for all controllers... 
00:19:58.402 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:58.402 aer_cb - Changed Namespace 00:19:58.402 Cleaning up... 00:19:58.402 [ 00:19:58.402 { 00:19:58.402 "allow_any_host": true, 00:19:58.402 "hosts": [], 00:19:58.402 "listen_addresses": [], 00:19:58.402 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:58.402 "subtype": "Discovery" 00:19:58.402 }, 00:19:58.402 { 00:19:58.402 "allow_any_host": true, 00:19:58.402 "hosts": [], 00:19:58.402 "listen_addresses": [ 00:19:58.402 { 00:19:58.402 "adrfam": "IPv4", 00:19:58.402 "traddr": "10.0.0.2", 00:19:58.402 "transport": "TCP", 00:19:58.402 "trsvcid": "4420", 00:19:58.402 "trtype": "TCP" 00:19:58.402 } 00:19:58.402 ], 00:19:58.402 "max_cntlid": 65519, 00:19:58.402 "max_namespaces": 2, 00:19:58.402 "min_cntlid": 1, 00:19:58.402 "model_number": "SPDK bdev Controller", 00:19:58.402 "namespaces": [ 00:19:58.402 { 00:19:58.402 "bdev_name": "Malloc0", 00:19:58.402 "name": "Malloc0", 00:19:58.402 "nguid": "0AFF531DA6A746148188BA3B6C666318", 00:19:58.402 "nsid": 1, 00:19:58.402 "uuid": "0aff531d-a6a7-4614-8188-ba3b6c666318" 00:19:58.402 }, 00:19:58.402 { 00:19:58.402 "bdev_name": "Malloc1", 00:19:58.402 "name": "Malloc1", 00:19:58.402 "nguid": "101AAA25DEBA4AB9832B0DFBCF342925", 00:19:58.402 "nsid": 2, 00:19:58.402 "uuid": "101aaa25-deba-4ab9-832b-0dfbcf342925" 00:19:58.402 } 00:19:58.402 ], 00:19:58.402 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.402 "serial_number": "SPDK00000000000001", 00:19:58.402 "subtype": "NVMe" 00:19:58.402 } 00:19:58.402 ] 00:19:58.402 15:06:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.402 15:06:21 -- host/aer.sh@43 -- # wait 93166 00:19:58.402 15:06:21 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:58.402 15:06:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.402 15:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:58.402 15:06:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.402 15:06:21 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:58.402 15:06:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.402 15:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:58.402 15:06:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.402 15:06:21 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:58.402 15:06:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.402 15:06:21 -- common/autotest_common.sh@10 -- # set +x 00:19:58.402 15:06:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.402 15:06:21 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:58.402 15:06:21 -- host/aer.sh@51 -- # nvmftestfini 00:19:58.402 15:06:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:58.402 15:06:21 -- nvmf/common.sh@116 -- # sync 00:19:58.402 15:06:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:58.402 15:06:21 -- nvmf/common.sh@119 -- # set +e 00:19:58.402 15:06:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:58.402 15:06:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:58.402 rmmod nvme_tcp 00:19:58.662 rmmod nvme_fabrics 00:19:58.662 rmmod nvme_keyring 00:19:58.662 15:06:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:58.662 15:06:22 -- nvmf/common.sh@123 -- # set -e 00:19:58.662 15:06:22 -- nvmf/common.sh@124 -- # return 0 00:19:58.662 15:06:22 -- nvmf/common.sh@477 -- # '[' -n 93106 ']' 00:19:58.662 15:06:22 -- nvmf/common.sh@478 -- # killprocess 93106 00:19:58.662 15:06:22 -- 
common/autotest_common.sh@936 -- # '[' -z 93106 ']' 00:19:58.662 15:06:22 -- common/autotest_common.sh@940 -- # kill -0 93106 00:19:58.662 15:06:22 -- common/autotest_common.sh@941 -- # uname 00:19:58.662 15:06:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:58.662 15:06:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93106 00:19:58.662 15:06:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:58.662 15:06:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:58.662 killing process with pid 93106 00:19:58.662 15:06:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93106' 00:19:58.662 15:06:22 -- common/autotest_common.sh@955 -- # kill 93106 00:19:58.662 [2024-11-18 15:06:22.070313] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:58.662 15:06:22 -- common/autotest_common.sh@960 -- # wait 93106 00:19:58.920 15:06:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:58.920 15:06:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:58.920 15:06:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:58.920 15:06:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:58.921 15:06:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:58.921 15:06:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.921 15:06:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.921 15:06:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.921 15:06:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:58.921 ************************************ 00:19:58.921 END TEST nvmf_aer 00:19:58.921 ************************************ 00:19:58.921 00:19:58.921 real 0m2.532s 00:19:58.921 user 0m7.083s 00:19:58.921 sys 0m0.767s 00:19:58.921 15:06:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:58.921 15:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:58.921 15:06:22 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:58.921 15:06:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:58.921 15:06:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:58.921 15:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:58.921 ************************************ 00:19:58.921 START TEST nvmf_async_init 00:19:58.921 ************************************ 00:19:58.921 15:06:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:58.921 * Looking for test storage... 
00:19:58.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:58.921 15:06:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:58.921 15:06:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:58.921 15:06:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:58.921 15:06:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:58.921 15:06:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:58.921 15:06:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:58.921 15:06:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:58.921 15:06:22 -- scripts/common.sh@335 -- # IFS=.-: 00:19:58.921 15:06:22 -- scripts/common.sh@335 -- # read -ra ver1 00:19:58.921 15:06:22 -- scripts/common.sh@336 -- # IFS=.-: 00:19:58.921 15:06:22 -- scripts/common.sh@336 -- # read -ra ver2 00:19:58.921 15:06:22 -- scripts/common.sh@337 -- # local 'op=<' 00:19:58.921 15:06:22 -- scripts/common.sh@339 -- # ver1_l=2 00:19:58.921 15:06:22 -- scripts/common.sh@340 -- # ver2_l=1 00:19:58.921 15:06:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:58.921 15:06:22 -- scripts/common.sh@343 -- # case "$op" in 00:19:58.921 15:06:22 -- scripts/common.sh@344 -- # : 1 00:19:58.921 15:06:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:58.921 15:06:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:58.921 15:06:22 -- scripts/common.sh@364 -- # decimal 1 00:19:58.921 15:06:22 -- scripts/common.sh@352 -- # local d=1 00:19:58.921 15:06:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:58.921 15:06:22 -- scripts/common.sh@354 -- # echo 1 00:19:58.921 15:06:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:58.921 15:06:22 -- scripts/common.sh@365 -- # decimal 2 00:19:58.921 15:06:22 -- scripts/common.sh@352 -- # local d=2 00:19:58.921 15:06:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:58.921 15:06:22 -- scripts/common.sh@354 -- # echo 2 00:19:58.921 15:06:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:58.921 15:06:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:58.921 15:06:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:58.921 15:06:22 -- scripts/common.sh@367 -- # return 0 00:19:58.921 15:06:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:58.921 15:06:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:58.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.921 --rc genhtml_branch_coverage=1 00:19:58.921 --rc genhtml_function_coverage=1 00:19:58.921 --rc genhtml_legend=1 00:19:58.921 --rc geninfo_all_blocks=1 00:19:58.921 --rc geninfo_unexecuted_blocks=1 00:19:58.921 00:19:58.921 ' 00:19:58.921 15:06:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:58.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.921 --rc genhtml_branch_coverage=1 00:19:58.921 --rc genhtml_function_coverage=1 00:19:58.921 --rc genhtml_legend=1 00:19:58.921 --rc geninfo_all_blocks=1 00:19:58.921 --rc geninfo_unexecuted_blocks=1 00:19:58.921 00:19:58.921 ' 00:19:58.921 15:06:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:58.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.921 --rc genhtml_branch_coverage=1 00:19:58.921 --rc genhtml_function_coverage=1 00:19:58.921 --rc genhtml_legend=1 00:19:58.921 --rc geninfo_all_blocks=1 00:19:58.921 --rc geninfo_unexecuted_blocks=1 00:19:58.921 00:19:58.921 ' 00:19:58.921 
15:06:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:58.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.921 --rc genhtml_branch_coverage=1 00:19:58.921 --rc genhtml_function_coverage=1 00:19:58.921 --rc genhtml_legend=1 00:19:58.921 --rc geninfo_all_blocks=1 00:19:58.921 --rc geninfo_unexecuted_blocks=1 00:19:58.921 00:19:58.921 ' 00:19:58.921 15:06:22 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:59.181 15:06:22 -- nvmf/common.sh@7 -- # uname -s 00:19:59.181 15:06:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.181 15:06:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.181 15:06:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.181 15:06:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.181 15:06:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.181 15:06:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.181 15:06:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.181 15:06:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.181 15:06:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.181 15:06:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.181 15:06:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:19:59.181 15:06:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:19:59.181 15:06:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.181 15:06:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.181 15:06:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:59.181 15:06:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:59.181 15:06:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.181 15:06:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.181 15:06:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.181 15:06:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.181 15:06:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.181 15:06:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.181 15:06:22 -- paths/export.sh@5 -- # export PATH 00:19:59.181 15:06:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.181 15:06:22 -- nvmf/common.sh@46 -- # : 0 00:19:59.181 15:06:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:59.181 15:06:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:59.181 15:06:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:59.181 15:06:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.181 15:06:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.181 15:06:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:59.181 15:06:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:59.181 15:06:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:59.181 15:06:22 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:59.181 15:06:22 -- host/async_init.sh@14 -- # null_block_size=512 00:19:59.181 15:06:22 -- host/async_init.sh@15 -- # null_bdev=null0 00:19:59.181 15:06:22 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:59.181 15:06:22 -- host/async_init.sh@20 -- # uuidgen 00:19:59.181 15:06:22 -- host/async_init.sh@20 -- # tr -d - 00:19:59.181 15:06:22 -- host/async_init.sh@20 -- # nguid=7c12226cd26c4a1fb7d397a5bbb7deb8 00:19:59.181 15:06:22 -- host/async_init.sh@22 -- # nvmftestinit 00:19:59.181 15:06:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:59.181 15:06:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.181 15:06:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:59.181 15:06:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:59.181 15:06:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:59.181 15:06:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.181 15:06:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.181 15:06:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.181 15:06:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:59.181 15:06:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:59.181 15:06:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:59.181 15:06:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:59.181 15:06:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:59.181 15:06:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:59.181 15:06:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.181 15:06:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.181 15:06:22 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:59.182 15:06:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:59.182 15:06:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:59.182 15:06:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:59.182 15:06:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:59.182 15:06:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.182 15:06:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:59.182 15:06:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:59.182 15:06:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:59.182 15:06:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:59.182 15:06:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:59.182 15:06:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:59.182 Cannot find device "nvmf_tgt_br" 00:19:59.182 15:06:22 -- nvmf/common.sh@154 -- # true 00:19:59.182 15:06:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:59.182 Cannot find device "nvmf_tgt_br2" 00:19:59.182 15:06:22 -- nvmf/common.sh@155 -- # true 00:19:59.182 15:06:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:59.182 15:06:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:59.182 Cannot find device "nvmf_tgt_br" 00:19:59.182 15:06:22 -- nvmf/common.sh@157 -- # true 00:19:59.182 15:06:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:59.182 Cannot find device "nvmf_tgt_br2" 00:19:59.182 15:06:22 -- nvmf/common.sh@158 -- # true 00:19:59.182 15:06:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:59.182 15:06:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:59.182 15:06:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:59.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:59.182 15:06:22 -- nvmf/common.sh@161 -- # true 00:19:59.182 15:06:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:59.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:59.182 15:06:22 -- nvmf/common.sh@162 -- # true 00:19:59.182 15:06:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:59.182 15:06:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:59.182 15:06:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:59.182 15:06:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:59.182 15:06:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:59.182 15:06:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:59.182 15:06:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:59.182 15:06:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:59.182 15:06:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:59.182 15:06:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:59.182 15:06:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:59.182 15:06:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:59.182 15:06:22 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:59.182 15:06:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:59.182 15:06:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:59.182 15:06:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:59.182 15:06:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:59.441 15:06:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:59.441 15:06:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:59.441 15:06:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:59.441 15:06:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:59.441 15:06:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:59.441 15:06:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:59.441 15:06:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:59.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:19:59.441 00:19:59.441 --- 10.0.0.2 ping statistics --- 00:19:59.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.441 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:19:59.441 15:06:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:59.441 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:59.441 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:19:59.441 00:19:59.441 --- 10.0.0.3 ping statistics --- 00:19:59.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.441 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:59.441 15:06:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:59.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:59.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:19:59.441 00:19:59.441 --- 10.0.0.1 ping statistics --- 00:19:59.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.441 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:59.441 15:06:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.441 15:06:22 -- nvmf/common.sh@421 -- # return 0 00:19:59.441 15:06:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:59.441 15:06:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.441 15:06:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:59.441 15:06:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:59.441 15:06:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.441 15:06:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:59.441 15:06:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:59.441 15:06:22 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:59.441 15:06:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:59.441 15:06:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:59.441 15:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:59.441 15:06:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:59.441 15:06:22 -- nvmf/common.sh@469 -- # nvmfpid=93342 00:19:59.441 15:06:22 -- nvmf/common.sh@470 -- # waitforlisten 93342 00:19:59.441 15:06:22 -- common/autotest_common.sh@829 -- # '[' -z 93342 ']' 00:19:59.441 15:06:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.441 15:06:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:59.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.441 15:06:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.441 15:06:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:59.441 15:06:22 -- common/autotest_common.sh@10 -- # set +x 00:19:59.441 [2024-11-18 15:06:22.900597] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:59.441 [2024-11-18 15:06:22.900655] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.700 [2024-11-18 15:06:23.034600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.700 [2024-11-18 15:06:23.105863] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:59.700 [2024-11-18 15:06:23.106020] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.700 [2024-11-18 15:06:23.106032] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.700 [2024-11-18 15:06:23.106041] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
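The veth topology that nvmf_veth_init builds in the trace above reduces to a short script: the initiator interface (10.0.0.1) stays in the root namespace, the target interface (10.0.0.2) moves into nvmf_tgt_ns_spdk, and the two sides are joined through the nvmf_br bridge before the NVMe/TCP port is opened and connectivity is checked with ping. The condensed sketch below only restates commands already visible in the log; the second target interface (nvmf_tgt_if2 / 10.0.0.3), the teardown path, and the error handling in nvmf/common.sh are deliberately left out.

  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns "$NS"
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                         # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator
  modprobe nvme-tcp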
00:19:59.700 [2024-11-18 15:06:23.106069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.640 15:06:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:00.640 15:06:23 -- common/autotest_common.sh@862 -- # return 0 00:20:00.640 15:06:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:00.640 15:06:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:00.640 15:06:23 -- common/autotest_common.sh@10 -- # set +x 00:20:00.640 15:06:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.640 15:06:23 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:00.640 15:06:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.640 15:06:23 -- common/autotest_common.sh@10 -- # set +x 00:20:00.640 [2024-11-18 15:06:23.908634] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.640 15:06:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.640 15:06:23 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:00.640 15:06:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.640 15:06:23 -- common/autotest_common.sh@10 -- # set +x 00:20:00.640 null0 00:20:00.640 15:06:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.640 15:06:23 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:00.640 15:06:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.640 15:06:23 -- common/autotest_common.sh@10 -- # set +x 00:20:00.640 15:06:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.640 15:06:23 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:00.640 15:06:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.640 15:06:23 -- common/autotest_common.sh@10 -- # set +x 00:20:00.640 15:06:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.640 15:06:23 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7c12226cd26c4a1fb7d397a5bbb7deb8 00:20:00.640 15:06:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.640 15:06:23 -- common/autotest_common.sh@10 -- # set +x 00:20:00.640 15:06:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.640 15:06:23 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:00.640 15:06:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.640 15:06:23 -- common/autotest_common.sh@10 -- # set +x 00:20:00.640 [2024-11-18 15:06:23.952776] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.640 15:06:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.640 15:06:23 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:00.640 15:06:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.640 15:06:23 -- common/autotest_common.sh@10 -- # set +x 00:20:00.640 nvme0n1 00:20:00.640 15:06:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.640 15:06:24 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:00.640 15:06:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.640 15:06:24 -- common/autotest_common.sh@10 -- # set +x 00:20:00.640 [ 00:20:00.640 { 00:20:00.640 "aliases": [ 00:20:00.640 "7c12226c-d26c-4a1f-b7d3-97a5bbb7deb8" 
00:20:00.640 ], 00:20:00.640 "assigned_rate_limits": { 00:20:00.640 "r_mbytes_per_sec": 0, 00:20:00.640 "rw_ios_per_sec": 0, 00:20:00.640 "rw_mbytes_per_sec": 0, 00:20:00.640 "w_mbytes_per_sec": 0 00:20:00.640 }, 00:20:00.640 "block_size": 512, 00:20:00.640 "claimed": false, 00:20:00.640 "driver_specific": { 00:20:00.640 "mp_policy": "active_passive", 00:20:00.640 "nvme": [ 00:20:00.640 { 00:20:00.640 "ctrlr_data": { 00:20:00.640 "ana_reporting": false, 00:20:00.640 "cntlid": 1, 00:20:00.640 "firmware_revision": "24.01.1", 00:20:00.640 "model_number": "SPDK bdev Controller", 00:20:00.640 "multi_ctrlr": true, 00:20:00.640 "oacs": { 00:20:00.640 "firmware": 0, 00:20:00.640 "format": 0, 00:20:00.640 "ns_manage": 0, 00:20:00.640 "security": 0 00:20:00.640 }, 00:20:00.640 "serial_number": "00000000000000000000", 00:20:00.640 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:00.640 "vendor_id": "0x8086" 00:20:00.640 }, 00:20:00.640 "ns_data": { 00:20:00.640 "can_share": true, 00:20:00.640 "id": 1 00:20:00.640 }, 00:20:00.640 "trid": { 00:20:00.640 "adrfam": "IPv4", 00:20:00.640 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:00.640 "traddr": "10.0.0.2", 00:20:00.640 "trsvcid": "4420", 00:20:00.640 "trtype": "TCP" 00:20:00.640 }, 00:20:00.640 "vs": { 00:20:00.640 "nvme_version": "1.3" 00:20:00.640 } 00:20:00.640 } 00:20:00.640 ] 00:20:00.640 }, 00:20:00.640 "name": "nvme0n1", 00:20:00.640 "num_blocks": 2097152, 00:20:00.640 "product_name": "NVMe disk", 00:20:00.640 "supported_io_types": { 00:20:00.640 "abort": true, 00:20:00.640 "compare": true, 00:20:00.640 "compare_and_write": true, 00:20:00.640 "flush": true, 00:20:00.640 "nvme_admin": true, 00:20:00.640 "nvme_io": true, 00:20:00.640 "read": true, 00:20:00.640 "reset": true, 00:20:00.640 "unmap": false, 00:20:00.640 "write": true, 00:20:00.640 "write_zeroes": true 00:20:00.640 }, 00:20:00.640 "uuid": "7c12226c-d26c-4a1f-b7d3-97a5bbb7deb8", 00:20:00.640 "zoned": false 00:20:00.640 } 00:20:00.640 ] 00:20:00.640 15:06:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.640 15:06:24 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:00.640 15:06:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.640 15:06:24 -- common/autotest_common.sh@10 -- # set +x 00:20:00.640 [2024-11-18 15:06:24.218002] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:00.640 [2024-11-18 15:06:24.218080] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x907a00 (9): Bad file descriptor 00:20:00.911 [2024-11-18 15:06:24.350452] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
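Stripped of the rpc_cmd wrapper and the xtrace noise, the async_init setup traced above is an ordinary scripts/rpc.py sequence: create the TCP transport, back a subsystem with a null bdev carrying a fixed NGUID, expose it on 10.0.0.2:4420, then attach it back through the bdev_nvme initiator so it shows up as nvme0n1 and can be reset. The sketch below assumes rpc.py is talking to the target's default /var/tmp/spdk.sock, which is what the test's rpc_cmd helper does here.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nguid=7c12226cd26c4a1fb7d397a5bbb7deb8
  $rpc nvmf_create_transport -t tcp -o
  $rpc bdev_null_create null0 1024 512            # 1024 MiB, 512-byte blocks -> 2097152 blocks, as in the JSON above
  $rpc bdev_wait_for_examine
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Loop back through the initiator: the controller becomes nvme0, its namespace nvme0n1.
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0
  $rpc bdev_get_bdevs -b nvme0n1                  # JSON dump as shown in the trace
  $rpc bdev_nvme_reset_controller nvme0           # drives the "Resetting controller successful" path above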
00:20:00.911 15:06:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.911 15:06:24 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:00.911 15:06:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.911 15:06:24 -- common/autotest_common.sh@10 -- # set +x 00:20:00.911 [ 00:20:00.911 { 00:20:00.911 "aliases": [ 00:20:00.911 "7c12226c-d26c-4a1f-b7d3-97a5bbb7deb8" 00:20:00.911 ], 00:20:00.911 "assigned_rate_limits": { 00:20:00.911 "r_mbytes_per_sec": 0, 00:20:00.911 "rw_ios_per_sec": 0, 00:20:00.911 "rw_mbytes_per_sec": 0, 00:20:00.911 "w_mbytes_per_sec": 0 00:20:00.911 }, 00:20:00.911 "block_size": 512, 00:20:00.911 "claimed": false, 00:20:00.911 "driver_specific": { 00:20:00.911 "mp_policy": "active_passive", 00:20:00.911 "nvme": [ 00:20:00.911 { 00:20:00.911 "ctrlr_data": { 00:20:00.911 "ana_reporting": false, 00:20:00.911 "cntlid": 2, 00:20:00.911 "firmware_revision": "24.01.1", 00:20:00.911 "model_number": "SPDK bdev Controller", 00:20:00.911 "multi_ctrlr": true, 00:20:00.911 "oacs": { 00:20:00.911 "firmware": 0, 00:20:00.911 "format": 0, 00:20:00.911 "ns_manage": 0, 00:20:00.911 "security": 0 00:20:00.911 }, 00:20:00.911 "serial_number": "00000000000000000000", 00:20:00.911 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:00.911 "vendor_id": "0x8086" 00:20:00.911 }, 00:20:00.911 "ns_data": { 00:20:00.911 "can_share": true, 00:20:00.911 "id": 1 00:20:00.911 }, 00:20:00.911 "trid": { 00:20:00.911 "adrfam": "IPv4", 00:20:00.911 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:00.911 "traddr": "10.0.0.2", 00:20:00.911 "trsvcid": "4420", 00:20:00.911 "trtype": "TCP" 00:20:00.911 }, 00:20:00.911 "vs": { 00:20:00.911 "nvme_version": "1.3" 00:20:00.911 } 00:20:00.911 } 00:20:00.911 ] 00:20:00.911 }, 00:20:00.911 "name": "nvme0n1", 00:20:00.911 "num_blocks": 2097152, 00:20:00.911 "product_name": "NVMe disk", 00:20:00.911 "supported_io_types": { 00:20:00.911 "abort": true, 00:20:00.911 "compare": true, 00:20:00.911 "compare_and_write": true, 00:20:00.911 "flush": true, 00:20:00.911 "nvme_admin": true, 00:20:00.911 "nvme_io": true, 00:20:00.911 "read": true, 00:20:00.911 "reset": true, 00:20:00.911 "unmap": false, 00:20:00.911 "write": true, 00:20:00.911 "write_zeroes": true 00:20:00.911 }, 00:20:00.911 "uuid": "7c12226c-d26c-4a1f-b7d3-97a5bbb7deb8", 00:20:00.911 "zoned": false 00:20:00.911 } 00:20:00.911 ] 00:20:00.911 15:06:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.911 15:06:24 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.911 15:06:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.911 15:06:24 -- common/autotest_common.sh@10 -- # set +x 00:20:00.911 15:06:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.911 15:06:24 -- host/async_init.sh@53 -- # mktemp 00:20:00.911 15:06:24 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.HAkM9rCoJG 00:20:00.911 15:06:24 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:00.911 15:06:24 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.HAkM9rCoJG 00:20:00.911 15:06:24 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:00.911 15:06:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.911 15:06:24 -- common/autotest_common.sh@10 -- # set +x 00:20:00.911 15:06:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.911 15:06:24 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:00.911 15:06:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.911 15:06:24 -- common/autotest_common.sh@10 -- # set +x 00:20:00.911 [2024-11-18 15:06:24.418118] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:00.911 [2024-11-18 15:06:24.418232] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:00.911 15:06:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.911 15:06:24 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HAkM9rCoJG 00:20:00.911 15:06:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.911 15:06:24 -- common/autotest_common.sh@10 -- # set +x 00:20:00.911 15:06:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.911 15:06:24 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HAkM9rCoJG 00:20:00.911 15:06:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.911 15:06:24 -- common/autotest_common.sh@10 -- # set +x 00:20:00.911 [2024-11-18 15:06:24.434116] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:01.188 nvme0n1 00:20:01.188 15:06:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.188 15:06:24 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:01.188 15:06:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.188 15:06:24 -- common/autotest_common.sh@10 -- # set +x 00:20:01.188 [ 00:20:01.188 { 00:20:01.188 "aliases": [ 00:20:01.188 "7c12226c-d26c-4a1f-b7d3-97a5bbb7deb8" 00:20:01.188 ], 00:20:01.188 "assigned_rate_limits": { 00:20:01.188 "r_mbytes_per_sec": 0, 00:20:01.188 "rw_ios_per_sec": 0, 00:20:01.188 "rw_mbytes_per_sec": 0, 00:20:01.188 "w_mbytes_per_sec": 0 00:20:01.188 }, 00:20:01.188 "block_size": 512, 00:20:01.188 "claimed": false, 00:20:01.188 "driver_specific": { 00:20:01.188 "mp_policy": "active_passive", 00:20:01.188 "nvme": [ 00:20:01.188 { 00:20:01.188 "ctrlr_data": { 00:20:01.188 "ana_reporting": false, 00:20:01.188 "cntlid": 3, 00:20:01.188 "firmware_revision": "24.01.1", 00:20:01.188 "model_number": "SPDK bdev Controller", 00:20:01.188 "multi_ctrlr": true, 00:20:01.188 "oacs": { 00:20:01.188 "firmware": 0, 00:20:01.188 "format": 0, 00:20:01.188 "ns_manage": 0, 00:20:01.188 "security": 0 00:20:01.188 }, 00:20:01.188 "serial_number": "00000000000000000000", 00:20:01.188 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:01.188 "vendor_id": "0x8086" 00:20:01.188 }, 00:20:01.188 "ns_data": { 00:20:01.188 "can_share": true, 00:20:01.188 "id": 1 00:20:01.188 }, 00:20:01.188 "trid": { 00:20:01.188 "adrfam": "IPv4", 00:20:01.188 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:01.188 "traddr": "10.0.0.2", 00:20:01.188 "trsvcid": "4421", 00:20:01.188 "trtype": "TCP" 00:20:01.188 }, 00:20:01.188 "vs": { 00:20:01.188 "nvme_version": "1.3" 00:20:01.188 } 00:20:01.188 } 00:20:01.188 ] 00:20:01.188 }, 00:20:01.188 "name": "nvme0n1", 00:20:01.188 "num_blocks": 2097152, 00:20:01.188 "product_name": "NVMe disk", 00:20:01.188 "supported_io_types": { 00:20:01.188 "abort": true, 00:20:01.188 "compare": true, 00:20:01.188 "compare_and_write": true, 00:20:01.188 "flush": true, 00:20:01.188 "nvme_admin": true, 00:20:01.188 "nvme_io": true, 00:20:01.188 
"read": true, 00:20:01.188 "reset": true, 00:20:01.188 "unmap": false, 00:20:01.188 "write": true, 00:20:01.188 "write_zeroes": true 00:20:01.188 }, 00:20:01.188 "uuid": "7c12226c-d26c-4a1f-b7d3-97a5bbb7deb8", 00:20:01.188 "zoned": false 00:20:01.188 } 00:20:01.188 ] 00:20:01.188 15:06:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.188 15:06:24 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.188 15:06:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.188 15:06:24 -- common/autotest_common.sh@10 -- # set +x 00:20:01.188 15:06:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.188 15:06:24 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.HAkM9rCoJG 00:20:01.188 15:06:24 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:01.188 15:06:24 -- host/async_init.sh@78 -- # nvmftestfini 00:20:01.188 15:06:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:01.188 15:06:24 -- nvmf/common.sh@116 -- # sync 00:20:01.188 15:06:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:01.188 15:06:24 -- nvmf/common.sh@119 -- # set +e 00:20:01.188 15:06:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:01.188 15:06:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:01.188 rmmod nvme_tcp 00:20:01.188 rmmod nvme_fabrics 00:20:01.188 rmmod nvme_keyring 00:20:01.188 15:06:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:01.188 15:06:24 -- nvmf/common.sh@123 -- # set -e 00:20:01.188 15:06:24 -- nvmf/common.sh@124 -- # return 0 00:20:01.188 15:06:24 -- nvmf/common.sh@477 -- # '[' -n 93342 ']' 00:20:01.188 15:06:24 -- nvmf/common.sh@478 -- # killprocess 93342 00:20:01.188 15:06:24 -- common/autotest_common.sh@936 -- # '[' -z 93342 ']' 00:20:01.188 15:06:24 -- common/autotest_common.sh@940 -- # kill -0 93342 00:20:01.188 15:06:24 -- common/autotest_common.sh@941 -- # uname 00:20:01.188 15:06:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:01.188 15:06:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93342 00:20:01.188 15:06:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:01.188 15:06:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:01.188 killing process with pid 93342 00:20:01.188 15:06:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93342' 00:20:01.189 15:06:24 -- common/autotest_common.sh@955 -- # kill 93342 00:20:01.189 15:06:24 -- common/autotest_common.sh@960 -- # wait 93342 00:20:01.464 15:06:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:01.464 15:06:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:01.464 15:06:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:01.464 15:06:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:01.464 15:06:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:01.464 15:06:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.464 15:06:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:01.464 15:06:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.464 15:06:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:01.464 00:20:01.464 real 0m2.610s 00:20:01.464 user 0m2.415s 00:20:01.464 sys 0m0.639s 00:20:01.464 15:06:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:01.464 15:06:24 -- common/autotest_common.sh@10 -- # set +x 00:20:01.464 ************************************ 00:20:01.464 END TEST nvmf_async_init 00:20:01.464 
************************************ 00:20:01.464 15:06:25 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:01.464 15:06:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:01.464 15:06:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:01.464 15:06:25 -- common/autotest_common.sh@10 -- # set +x 00:20:01.464 ************************************ 00:20:01.464 START TEST dma 00:20:01.464 ************************************ 00:20:01.464 15:06:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:01.723 * Looking for test storage... 00:20:01.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:01.723 15:06:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:01.723 15:06:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:01.723 15:06:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:01.723 15:06:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:01.723 15:06:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:01.723 15:06:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:01.723 15:06:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:01.723 15:06:25 -- scripts/common.sh@335 -- # IFS=.-: 00:20:01.723 15:06:25 -- scripts/common.sh@335 -- # read -ra ver1 00:20:01.723 15:06:25 -- scripts/common.sh@336 -- # IFS=.-: 00:20:01.723 15:06:25 -- scripts/common.sh@336 -- # read -ra ver2 00:20:01.723 15:06:25 -- scripts/common.sh@337 -- # local 'op=<' 00:20:01.723 15:06:25 -- scripts/common.sh@339 -- # ver1_l=2 00:20:01.723 15:06:25 -- scripts/common.sh@340 -- # ver2_l=1 00:20:01.723 15:06:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:01.723 15:06:25 -- scripts/common.sh@343 -- # case "$op" in 00:20:01.723 15:06:25 -- scripts/common.sh@344 -- # : 1 00:20:01.723 15:06:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:01.723 15:06:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:01.723 15:06:25 -- scripts/common.sh@364 -- # decimal 1 00:20:01.723 15:06:25 -- scripts/common.sh@352 -- # local d=1 00:20:01.723 15:06:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:01.723 15:06:25 -- scripts/common.sh@354 -- # echo 1 00:20:01.723 15:06:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:01.723 15:06:25 -- scripts/common.sh@365 -- # decimal 2 00:20:01.723 15:06:25 -- scripts/common.sh@352 -- # local d=2 00:20:01.723 15:06:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:01.723 15:06:25 -- scripts/common.sh@354 -- # echo 2 00:20:01.723 15:06:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:01.723 15:06:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:01.723 15:06:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:01.723 15:06:25 -- scripts/common.sh@367 -- # return 0 00:20:01.723 15:06:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:01.723 15:06:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:01.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.723 --rc genhtml_branch_coverage=1 00:20:01.723 --rc genhtml_function_coverage=1 00:20:01.723 --rc genhtml_legend=1 00:20:01.723 --rc geninfo_all_blocks=1 00:20:01.723 --rc geninfo_unexecuted_blocks=1 00:20:01.723 00:20:01.723 ' 00:20:01.723 15:06:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:01.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.723 --rc genhtml_branch_coverage=1 00:20:01.723 --rc genhtml_function_coverage=1 00:20:01.723 --rc genhtml_legend=1 00:20:01.723 --rc geninfo_all_blocks=1 00:20:01.723 --rc geninfo_unexecuted_blocks=1 00:20:01.723 00:20:01.723 ' 00:20:01.723 15:06:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:01.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.723 --rc genhtml_branch_coverage=1 00:20:01.723 --rc genhtml_function_coverage=1 00:20:01.723 --rc genhtml_legend=1 00:20:01.723 --rc geninfo_all_blocks=1 00:20:01.723 --rc geninfo_unexecuted_blocks=1 00:20:01.723 00:20:01.723 ' 00:20:01.723 15:06:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:01.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.723 --rc genhtml_branch_coverage=1 00:20:01.723 --rc genhtml_function_coverage=1 00:20:01.723 --rc genhtml_legend=1 00:20:01.723 --rc geninfo_all_blocks=1 00:20:01.723 --rc geninfo_unexecuted_blocks=1 00:20:01.723 00:20:01.723 ' 00:20:01.723 15:06:25 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:01.723 15:06:25 -- nvmf/common.sh@7 -- # uname -s 00:20:01.723 15:06:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.723 15:06:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.723 15:06:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.723 15:06:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.723 15:06:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.723 15:06:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.723 15:06:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.723 15:06:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.723 15:06:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.723 15:06:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.723 15:06:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:20:01.723 
15:06:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:20:01.723 15:06:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.723 15:06:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.723 15:06:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:01.723 15:06:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:01.723 15:06:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.723 15:06:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.723 15:06:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.723 15:06:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.723 15:06:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.723 15:06:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.723 15:06:25 -- paths/export.sh@5 -- # export PATH 00:20:01.723 15:06:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.723 15:06:25 -- nvmf/common.sh@46 -- # : 0 00:20:01.723 15:06:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:01.723 15:06:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:01.723 15:06:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:01.723 15:06:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.723 15:06:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.723 15:06:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:01.723 15:06:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:01.724 15:06:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:01.724 15:06:25 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:01.724 15:06:25 -- host/dma.sh@13 -- # exit 0 00:20:01.724 00:20:01.724 real 0m0.208s 00:20:01.724 user 0m0.128s 00:20:01.724 sys 0m0.092s 00:20:01.724 15:06:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:01.724 15:06:25 -- common/autotest_common.sh@10 -- # set +x 00:20:01.724 ************************************ 00:20:01.724 END TEST dma 00:20:01.724 ************************************ 00:20:01.724 15:06:25 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:01.724 15:06:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:01.724 15:06:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:01.724 15:06:25 -- common/autotest_common.sh@10 -- # set +x 00:20:01.724 ************************************ 00:20:01.724 START TEST nvmf_identify 00:20:01.724 ************************************ 00:20:01.724 15:06:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:01.983 * Looking for test storage... 00:20:01.983 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:01.983 15:06:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:01.983 15:06:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:01.983 15:06:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:01.983 15:06:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:01.983 15:06:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:01.983 15:06:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:01.983 15:06:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:01.983 15:06:25 -- scripts/common.sh@335 -- # IFS=.-: 00:20:01.983 15:06:25 -- scripts/common.sh@335 -- # read -ra ver1 00:20:01.983 15:06:25 -- scripts/common.sh@336 -- # IFS=.-: 00:20:01.983 15:06:25 -- scripts/common.sh@336 -- # read -ra ver2 00:20:01.983 15:06:25 -- scripts/common.sh@337 -- # local 'op=<' 00:20:01.983 15:06:25 -- scripts/common.sh@339 -- # ver1_l=2 00:20:01.983 15:06:25 -- scripts/common.sh@340 -- # ver2_l=1 00:20:01.983 15:06:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:01.983 15:06:25 -- scripts/common.sh@343 -- # case "$op" in 00:20:01.983 15:06:25 -- scripts/common.sh@344 -- # : 1 00:20:01.983 15:06:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:01.983 15:06:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:01.983 15:06:25 -- scripts/common.sh@364 -- # decimal 1 00:20:01.983 15:06:25 -- scripts/common.sh@352 -- # local d=1 00:20:01.983 15:06:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:01.983 15:06:25 -- scripts/common.sh@354 -- # echo 1 00:20:01.983 15:06:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:01.983 15:06:25 -- scripts/common.sh@365 -- # decimal 2 00:20:01.983 15:06:25 -- scripts/common.sh@352 -- # local d=2 00:20:01.983 15:06:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:01.983 15:06:25 -- scripts/common.sh@354 -- # echo 2 00:20:01.983 15:06:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:01.983 15:06:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:01.983 15:06:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:01.983 15:06:25 -- scripts/common.sh@367 -- # return 0 00:20:01.983 15:06:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:01.983 15:06:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:01.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.983 --rc genhtml_branch_coverage=1 00:20:01.983 --rc genhtml_function_coverage=1 00:20:01.983 --rc genhtml_legend=1 00:20:01.983 --rc geninfo_all_blocks=1 00:20:01.983 --rc geninfo_unexecuted_blocks=1 00:20:01.983 00:20:01.983 ' 00:20:01.983 15:06:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:01.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.983 --rc genhtml_branch_coverage=1 00:20:01.983 --rc genhtml_function_coverage=1 00:20:01.983 --rc genhtml_legend=1 00:20:01.983 --rc geninfo_all_blocks=1 00:20:01.983 --rc geninfo_unexecuted_blocks=1 00:20:01.983 00:20:01.983 ' 00:20:01.983 15:06:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:01.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.983 --rc genhtml_branch_coverage=1 00:20:01.983 --rc genhtml_function_coverage=1 00:20:01.983 --rc genhtml_legend=1 00:20:01.983 --rc geninfo_all_blocks=1 00:20:01.983 --rc geninfo_unexecuted_blocks=1 00:20:01.983 00:20:01.983 ' 00:20:01.983 15:06:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:01.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.983 --rc genhtml_branch_coverage=1 00:20:01.983 --rc genhtml_function_coverage=1 00:20:01.983 --rc genhtml_legend=1 00:20:01.983 --rc geninfo_all_blocks=1 00:20:01.983 --rc geninfo_unexecuted_blocks=1 00:20:01.983 00:20:01.983 ' 00:20:01.983 15:06:25 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:01.983 15:06:25 -- nvmf/common.sh@7 -- # uname -s 00:20:01.983 15:06:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.983 15:06:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.983 15:06:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.983 15:06:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.983 15:06:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.983 15:06:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.983 15:06:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.983 15:06:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.983 15:06:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.983 15:06:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.983 15:06:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:20:01.983 
15:06:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:20:01.983 15:06:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.983 15:06:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.983 15:06:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:01.983 15:06:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:01.983 15:06:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.983 15:06:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.983 15:06:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.984 15:06:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.984 15:06:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.984 15:06:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.984 15:06:25 -- paths/export.sh@5 -- # export PATH 00:20:01.984 15:06:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.984 15:06:25 -- nvmf/common.sh@46 -- # : 0 00:20:01.984 15:06:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:01.984 15:06:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:01.984 15:06:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:01.984 15:06:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.984 15:06:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.984 15:06:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:01.984 15:06:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:01.984 15:06:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:01.984 15:06:25 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:01.984 15:06:25 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:01.984 15:06:25 -- host/identify.sh@14 -- # nvmftestinit 00:20:01.984 15:06:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:01.984 15:06:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.984 15:06:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:01.984 15:06:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:01.984 15:06:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:01.984 15:06:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.984 15:06:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:01.984 15:06:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.984 15:06:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:01.984 15:06:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:01.984 15:06:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:01.984 15:06:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:01.984 15:06:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:01.984 15:06:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:01.984 15:06:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:01.984 15:06:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:01.984 15:06:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:01.984 15:06:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:01.984 15:06:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:01.984 15:06:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:01.984 15:06:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:01.984 15:06:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:01.984 15:06:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:01.984 15:06:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:01.984 15:06:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:01.984 15:06:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:01.984 15:06:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:01.984 15:06:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:01.984 Cannot find device "nvmf_tgt_br" 00:20:01.984 15:06:25 -- nvmf/common.sh@154 -- # true 00:20:01.984 15:06:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:01.984 Cannot find device "nvmf_tgt_br2" 00:20:01.984 15:06:25 -- nvmf/common.sh@155 -- # true 00:20:01.984 15:06:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:01.984 15:06:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:01.984 Cannot find device "nvmf_tgt_br" 00:20:01.984 15:06:25 -- nvmf/common.sh@157 -- # true 00:20:01.984 15:06:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:01.984 Cannot find device "nvmf_tgt_br2" 00:20:01.984 15:06:25 -- nvmf/common.sh@158 -- # true 00:20:01.984 15:06:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:01.984 15:06:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:02.243 15:06:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:02.243 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:20:02.243 15:06:25 -- nvmf/common.sh@161 -- # true 00:20:02.243 15:06:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:02.243 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:02.243 15:06:25 -- nvmf/common.sh@162 -- # true 00:20:02.243 15:06:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:02.243 15:06:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:02.243 15:06:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:02.243 15:06:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:02.243 15:06:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:02.243 15:06:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:02.243 15:06:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:02.243 15:06:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:02.243 15:06:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:02.243 15:06:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:02.243 15:06:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:02.243 15:06:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:02.243 15:06:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:02.244 15:06:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:02.244 15:06:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:02.244 15:06:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:02.244 15:06:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:02.244 15:06:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:02.244 15:06:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:02.244 15:06:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:02.244 15:06:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:02.244 15:06:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:02.244 15:06:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:02.244 15:06:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:02.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:02.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:20:02.244 00:20:02.244 --- 10.0.0.2 ping statistics --- 00:20:02.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.244 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:02.244 15:06:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:02.244 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:02.244 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:20:02.244 00:20:02.244 --- 10.0.0.3 ping statistics --- 00:20:02.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.244 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:02.244 15:06:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:02.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:02.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:02.244 00:20:02.244 --- 10.0.0.1 ping statistics --- 00:20:02.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.244 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:02.244 15:06:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.244 15:06:25 -- nvmf/common.sh@421 -- # return 0 00:20:02.244 15:06:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:02.244 15:06:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.244 15:06:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:02.244 15:06:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:02.244 15:06:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.244 15:06:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:02.244 15:06:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:02.244 15:06:25 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:02.244 15:06:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:02.244 15:06:25 -- common/autotest_common.sh@10 -- # set +x 00:20:02.244 15:06:25 -- host/identify.sh@19 -- # nvmfpid=93626 00:20:02.244 15:06:25 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:02.244 15:06:25 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:02.244 15:06:25 -- host/identify.sh@23 -- # waitforlisten 93626 00:20:02.244 15:06:25 -- common/autotest_common.sh@829 -- # '[' -z 93626 ']' 00:20:02.244 15:06:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.244 15:06:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.244 15:06:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.244 15:06:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.244 15:06:25 -- common/autotest_common.sh@10 -- # set +x 00:20:02.503 [2024-11-18 15:06:25.867878] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:02.503 [2024-11-18 15:06:25.867970] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.503 [2024-11-18 15:06:26.002036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:02.503 [2024-11-18 15:06:26.062391] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:02.503 [2024-11-18 15:06:26.062542] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.503 [2024-11-18 15:06:26.062557] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.503 [2024-11-18 15:06:26.062566] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
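The nvmf_veth_init block above (common.sh@140-206) boils down to the following topology: one veth pair for the initiator, two for the target, the target ends moved into a private network namespace, and the bridge-side peers enslaved to nvmf_br. This is a condensed sketch reconstructed from the xtrace lines, not a verbatim replay of nvmf/common.sh; the interface names, the nvmf_tgt_ns_spdk namespace and the 10.0.0.x addresses are the ones the harness itself uses.

  # Target-side veth ends live in a dedicated namespace; the initiator end stays in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addressing: initiator 10.0.0.1, target 10.0.0.2 and 10.0.0.3 inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # Bring everything up and bridge the peer ends so the two sides can reach each other.
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # Open TCP/4420 on the initiator interface, allow bridge-local forwarding, then sanity-check with ping.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The "Cannot find device" / "Cannot open network namespace" messages earlier in the trace are expected here: the teardown half of the helper runs first and fails harmlessly when no previous fixture exists. Once the pings succeed, nvmf_tgt is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why NVMF_TARGET_NS_CMD is prepended to NVMF_APP before the target starts.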
00:20:02.503 [2024-11-18 15:06:26.062686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.503 [2024-11-18 15:06:26.062878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.503 [2024-11-18 15:06:26.063251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:02.503 [2024-11-18 15:06:26.063262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.441 15:06:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:03.441 15:06:26 -- common/autotest_common.sh@862 -- # return 0 00:20:03.441 15:06:26 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:03.441 15:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.441 15:06:26 -- common/autotest_common.sh@10 -- # set +x 00:20:03.441 [2024-11-18 15:06:26.893124] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.441 15:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.441 15:06:26 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:03.441 15:06:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:03.441 15:06:26 -- common/autotest_common.sh@10 -- # set +x 00:20:03.441 15:06:26 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:03.441 15:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.441 15:06:26 -- common/autotest_common.sh@10 -- # set +x 00:20:03.441 Malloc0 00:20:03.441 15:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.441 15:06:26 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:03.441 15:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.441 15:06:26 -- common/autotest_common.sh@10 -- # set +x 00:20:03.441 15:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.441 15:06:26 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:03.441 15:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.441 15:06:26 -- common/autotest_common.sh@10 -- # set +x 00:20:03.441 15:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.441 15:06:26 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:03.441 15:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.441 15:06:26 -- common/autotest_common.sh@10 -- # set +x 00:20:03.441 [2024-11-18 15:06:27.002808] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.441 15:06:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.441 15:06:27 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:03.441 15:06:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.441 15:06:27 -- common/autotest_common.sh@10 -- # set +x 00:20:03.441 15:06:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.441 15:06:27 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:03.441 15:06:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.441 15:06:27 -- common/autotest_common.sh@10 -- # set +x 00:20:03.441 [2024-11-18 15:06:27.022489] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:03.441 [ 
00:20:03.441 { 00:20:03.441 "allow_any_host": true, 00:20:03.441 "hosts": [], 00:20:03.441 "listen_addresses": [ 00:20:03.441 { 00:20:03.441 "adrfam": "IPv4", 00:20:03.701 "traddr": "10.0.0.2", 00:20:03.701 "transport": "TCP", 00:20:03.701 "trsvcid": "4420", 00:20:03.701 "trtype": "TCP" 00:20:03.701 } 00:20:03.701 ], 00:20:03.701 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:03.701 "subtype": "Discovery" 00:20:03.701 }, 00:20:03.701 { 00:20:03.701 "allow_any_host": true, 00:20:03.701 "hosts": [], 00:20:03.701 "listen_addresses": [ 00:20:03.701 { 00:20:03.701 "adrfam": "IPv4", 00:20:03.702 "traddr": "10.0.0.2", 00:20:03.702 "transport": "TCP", 00:20:03.702 "trsvcid": "4420", 00:20:03.702 "trtype": "TCP" 00:20:03.702 } 00:20:03.702 ], 00:20:03.702 "max_cntlid": 65519, 00:20:03.702 "max_namespaces": 32, 00:20:03.702 "min_cntlid": 1, 00:20:03.702 "model_number": "SPDK bdev Controller", 00:20:03.702 "namespaces": [ 00:20:03.702 { 00:20:03.702 "bdev_name": "Malloc0", 00:20:03.702 "eui64": "ABCDEF0123456789", 00:20:03.702 "name": "Malloc0", 00:20:03.702 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:03.702 "nsid": 1, 00:20:03.702 "uuid": "204088fe-6b16-40ea-8d0b-847829decb34" 00:20:03.702 } 00:20:03.702 ], 00:20:03.702 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.702 "serial_number": "SPDK00000000000001", 00:20:03.702 "subtype": "NVMe" 00:20:03.702 } 00:20:03.702 ] 00:20:03.702 15:06:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.702 15:06:27 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:03.702 [2024-11-18 15:06:27.058079] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
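The subsystem JSON above is the net effect of the handful of RPCs issued by host/identify.sh just before it; rpc_cmd in the trace wraps scripts/rpc.py talking to /var/tmp/spdk.sock. The sketch below is an approximation in that it calls rpc.py directly and drops the wrapper's retry and bookkeeping, but it uses exactly the arguments visible in the log (the rpc.py path is inferred from the repo prefix shown in the trace):

  SPDK=/home/vagrant/spdk_repo/spdk            # repo prefix as seen in the trace
  RPC="$SPDK/scripts/rpc.py"

  # Transport, backing bdev, subsystem, namespace and listeners (identify.sh@24-35).
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0                      # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems                                       # prints the JSON shown above

  # First identify pass: interrogate the discovery subsystem with full log output (-L all).
  "$SPDK/build/bin/spdk_nvme_identify" \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

The deprecation warning emitted by nvmf_get_subsystems (listener.transport in favor of trtype) is why each listen address in the JSON still carries both "transport": "TCP" and "trtype": "TCP".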
00:20:03.702 [2024-11-18 15:06:27.058147] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93681 ] 00:20:03.702 [2024-11-18 15:06:27.197109] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:03.702 [2024-11-18 15:06:27.197189] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:03.702 [2024-11-18 15:06:27.197197] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:03.702 [2024-11-18 15:06:27.197207] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:03.702 [2024-11-18 15:06:27.197216] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:03.702 [2024-11-18 15:06:27.197417] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:03.702 [2024-11-18 15:06:27.197486] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1015510 0 00:20:03.702 [2024-11-18 15:06:27.211366] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:03.702 [2024-11-18 15:06:27.211391] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:03.702 [2024-11-18 15:06:27.211412] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:03.702 [2024-11-18 15:06:27.211416] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:03.702 [2024-11-18 15:06:27.211460] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.702 [2024-11-18 15:06:27.211467] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.702 [2024-11-18 15:06:27.211471] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1015510) 00:20:03.702 [2024-11-18 15:06:27.211483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:03.702 [2024-11-18 15:06:27.211514] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10618a0, cid 0, qid 0 00:20:03.702 [2024-11-18 15:06:27.219363] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.702 [2024-11-18 15:06:27.219386] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.702 [2024-11-18 15:06:27.219406] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.702 [2024-11-18 15:06:27.219410] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10618a0) on tqpair=0x1015510 00:20:03.702 [2024-11-18 15:06:27.219425] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:03.702 [2024-11-18 15:06:27.219432] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:03.702 [2024-11-18 15:06:27.219437] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:03.702 [2024-11-18 15:06:27.219453] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.702 [2024-11-18 15:06:27.219458] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.702 [2024-11-18 
15:06:27.219461] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1015510) 00:20:03.702 [2024-11-18 15:06:27.219470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.702 [2024-11-18 15:06:27.219499] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10618a0, cid 0, qid 0 00:20:03.702 [2024-11-18 15:06:27.219575] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.702 [2024-11-18 15:06:27.219581] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.702 [2024-11-18 15:06:27.219584] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.702 [2024-11-18 15:06:27.219588] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10618a0) on tqpair=0x1015510 00:20:03.702 [2024-11-18 15:06:27.219594] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:03.702 [2024-11-18 15:06:27.219601] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:03.702 [2024-11-18 15:06:27.219608] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.702 [2024-11-18 15:06:27.219612] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.702 [2024-11-18 15:06:27.219616] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1015510) 00:20:03.702 [2024-11-18 15:06:27.219626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.702 [2024-11-18 15:06:27.219672] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10618a0, cid 0, qid 0 00:20:03.702 [2024-11-18 15:06:27.219722] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.702 [2024-11-18 15:06:27.219729] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.702 [2024-11-18 15:06:27.219732] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.702 [2024-11-18 15:06:27.219736] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10618a0) on tqpair=0x1015510 00:20:03.702 [2024-11-18 15:06:27.219743] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:03.702 [2024-11-18 15:06:27.219751] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:03.702 [2024-11-18 15:06:27.219761] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.702 [2024-11-18 15:06:27.219768] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.702 [2024-11-18 15:06:27.219773] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1015510) 00:20:03.702 [2024-11-18 15:06:27.219783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.702 [2024-11-18 15:06:27.219811] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10618a0, cid 0, qid 0 00:20:03.702 [2024-11-18 15:06:27.219862] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.702 [2024-11-18 15:06:27.219873] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.702 [2024-11-18 15:06:27.219879] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.702 [2024-11-18 15:06:27.219884] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10618a0) on tqpair=0x1015510 00:20:03.702 [2024-11-18 15:06:27.219894] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:03.702 [2024-11-18 15:06:27.219909] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.702 [2024-11-18 15:06:27.219914] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.702 [2024-11-18 15:06:27.219917] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1015510) 00:20:03.702 [2024-11-18 15:06:27.219925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.702 [2024-11-18 15:06:27.219946] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10618a0, cid 0, qid 0 00:20:03.702 [2024-11-18 15:06:27.219995] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.702 [2024-11-18 15:06:27.220004] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.702 [2024-11-18 15:06:27.220010] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.702 [2024-11-18 15:06:27.220015] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10618a0) on tqpair=0x1015510 00:20:03.702 [2024-11-18 15:06:27.220024] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:03.702 [2024-11-18 15:06:27.220032] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:03.702 [2024-11-18 15:06:27.220044] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:03.702 [2024-11-18 15:06:27.220153] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:03.702 [2024-11-18 15:06:27.220161] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:03.702 [2024-11-18 15:06:27.220170] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.702 [2024-11-18 15:06:27.220174] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.702 [2024-11-18 15:06:27.220177] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1015510) 00:20:03.702 [2024-11-18 15:06:27.220184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.702 [2024-11-18 15:06:27.220207] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10618a0, cid 0, qid 0 00:20:03.702 [2024-11-18 15:06:27.220271] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.702 [2024-11-18 15:06:27.220281] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.702 [2024-11-18 15:06:27.220284] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:20:03.703 [2024-11-18 15:06:27.220288] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10618a0) on tqpair=0x1015510 00:20:03.703 [2024-11-18 15:06:27.220294] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:03.703 [2024-11-18 15:06:27.220304] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.220308] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.220312] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1015510) 00:20:03.703 [2024-11-18 15:06:27.220319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.703 [2024-11-18 15:06:27.220376] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10618a0, cid 0, qid 0 00:20:03.703 [2024-11-18 15:06:27.220411] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.703 [2024-11-18 15:06:27.220418] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.703 [2024-11-18 15:06:27.220422] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.220426] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10618a0) on tqpair=0x1015510 00:20:03.703 [2024-11-18 15:06:27.220432] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:03.703 [2024-11-18 15:06:27.220440] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:03.703 [2024-11-18 15:06:27.220452] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:03.703 [2024-11-18 15:06:27.220476] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:03.703 [2024-11-18 15:06:27.220488] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.220492] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.220496] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1015510) 00:20:03.703 [2024-11-18 15:06:27.220503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.703 [2024-11-18 15:06:27.220527] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10618a0, cid 0, qid 0 00:20:03.703 [2024-11-18 15:06:27.220622] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.703 [2024-11-18 15:06:27.220631] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.703 [2024-11-18 15:06:27.220635] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.220639] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1015510): datao=0, datal=4096, cccid=0 00:20:03.703 [2024-11-18 15:06:27.220644] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10618a0) on tqpair(0x1015510): expected_datao=0, 
payload_size=4096 00:20:03.703 [2024-11-18 15:06:27.220652] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.220657] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.220665] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.703 [2024-11-18 15:06:27.220671] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.703 [2024-11-18 15:06:27.220674] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.220677] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10618a0) on tqpair=0x1015510 00:20:03.703 [2024-11-18 15:06:27.220687] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:03.703 [2024-11-18 15:06:27.220692] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:03.703 [2024-11-18 15:06:27.220711] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:03.703 [2024-11-18 15:06:27.220716] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:03.703 [2024-11-18 15:06:27.220720] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:03.703 [2024-11-18 15:06:27.220725] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:03.703 [2024-11-18 15:06:27.220743] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:03.703 [2024-11-18 15:06:27.220756] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.220763] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.220769] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1015510) 00:20:03.703 [2024-11-18 15:06:27.220777] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:03.703 [2024-11-18 15:06:27.220801] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10618a0, cid 0, qid 0 00:20:03.703 [2024-11-18 15:06:27.220864] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.703 [2024-11-18 15:06:27.220872] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.703 [2024-11-18 15:06:27.220878] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.220883] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10618a0) on tqpair=0x1015510 00:20:03.703 [2024-11-18 15:06:27.220895] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.220901] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.220907] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1015510) 00:20:03.703 [2024-11-18 15:06:27.220916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.703 [2024-11-18 
15:06:27.220923] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.220926] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.220930] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1015510) 00:20:03.703 [2024-11-18 15:06:27.220935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.703 [2024-11-18 15:06:27.220940] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.220943] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.220947] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1015510) 00:20:03.703 [2024-11-18 15:06:27.220978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.703 [2024-11-18 15:06:27.220987] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.220990] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.220994] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.703 [2024-11-18 15:06:27.220999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.703 [2024-11-18 15:06:27.221004] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:03.703 [2024-11-18 15:06:27.221019] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:03.703 [2024-11-18 15:06:27.221027] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.221030] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.221034] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1015510) 00:20:03.703 [2024-11-18 15:06:27.221040] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.703 [2024-11-18 15:06:27.221073] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10618a0, cid 0, qid 0 00:20:03.703 [2024-11-18 15:06:27.221085] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061a00, cid 1, qid 0 00:20:03.703 [2024-11-18 15:06:27.221093] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061b60, cid 2, qid 0 00:20:03.703 [2024-11-18 15:06:27.221100] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.703 [2024-11-18 15:06:27.221108] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061e20, cid 4, qid 0 00:20:03.703 [2024-11-18 15:06:27.221184] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.703 [2024-11-18 15:06:27.221196] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.703 [2024-11-18 15:06:27.221202] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.221209] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x1061e20) on tqpair=0x1015510 00:20:03.703 [2024-11-18 15:06:27.221216] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:03.703 [2024-11-18 15:06:27.221221] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:03.703 [2024-11-18 15:06:27.221233] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.221237] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.221241] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1015510) 00:20:03.703 [2024-11-18 15:06:27.221248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.703 [2024-11-18 15:06:27.221285] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061e20, cid 4, qid 0 00:20:03.703 [2024-11-18 15:06:27.221380] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.703 [2024-11-18 15:06:27.221391] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.703 [2024-11-18 15:06:27.221394] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.221397] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1015510): datao=0, datal=4096, cccid=4 00:20:03.703 [2024-11-18 15:06:27.221402] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1061e20) on tqpair(0x1015510): expected_datao=0, payload_size=4096 00:20:03.703 [2024-11-18 15:06:27.221409] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.221413] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.221421] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.703 [2024-11-18 15:06:27.221426] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.703 [2024-11-18 15:06:27.221429] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.703 [2024-11-18 15:06:27.221433] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061e20) on tqpair=0x1015510 00:20:03.703 [2024-11-18 15:06:27.221447] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:03.704 [2024-11-18 15:06:27.221501] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.704 [2024-11-18 15:06:27.221512] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.704 [2024-11-18 15:06:27.221516] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1015510) 00:20:03.704 [2024-11-18 15:06:27.221524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.704 [2024-11-18 15:06:27.221531] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.704 [2024-11-18 15:06:27.221535] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.704 [2024-11-18 15:06:27.221538] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1015510) 00:20:03.704 [2024-11-18 15:06:27.221543] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.704 [2024-11-18 15:06:27.221576] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061e20, cid 4, qid 0 00:20:03.704 [2024-11-18 15:06:27.221584] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061f80, cid 5, qid 0 00:20:03.704 [2024-11-18 15:06:27.221684] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.704 [2024-11-18 15:06:27.221695] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.704 [2024-11-18 15:06:27.221701] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.704 [2024-11-18 15:06:27.221706] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1015510): datao=0, datal=1024, cccid=4 00:20:03.704 [2024-11-18 15:06:27.221713] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1061e20) on tqpair(0x1015510): expected_datao=0, payload_size=1024 00:20:03.704 [2024-11-18 15:06:27.221723] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.704 [2024-11-18 15:06:27.221730] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.704 [2024-11-18 15:06:27.221739] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.704 [2024-11-18 15:06:27.221746] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.704 [2024-11-18 15:06:27.221750] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.704 [2024-11-18 15:06:27.221754] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061f80) on tqpair=0x1015510 00:20:03.704 [2024-11-18 15:06:27.265391] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.704 [2024-11-18 15:06:27.265413] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.704 [2024-11-18 15:06:27.265418] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.704 [2024-11-18 15:06:27.265437] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061e20) on tqpair=0x1015510 00:20:03.704 [2024-11-18 15:06:27.265466] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.704 [2024-11-18 15:06:27.265476] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.704 [2024-11-18 15:06:27.265479] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1015510) 00:20:03.704 [2024-11-18 15:06:27.265488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.704 [2024-11-18 15:06:27.265522] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061e20, cid 4, qid 0 00:20:03.704 [2024-11-18 15:06:27.265609] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.704 [2024-11-18 15:06:27.265615] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.704 [2024-11-18 15:06:27.265618] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.704 [2024-11-18 15:06:27.265622] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1015510): datao=0, datal=3072, cccid=4 00:20:03.704 [2024-11-18 15:06:27.265626] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1061e20) on tqpair(0x1015510): expected_datao=0, payload_size=3072 00:20:03.704 [2024-11-18 
15:06:27.265633] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.704 [2024-11-18 15:06:27.265637] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.704 [2024-11-18 15:06:27.265645] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.704 [2024-11-18 15:06:27.265650] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.704 [2024-11-18 15:06:27.265653] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.704 [2024-11-18 15:06:27.265658] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061e20) on tqpair=0x1015510 00:20:03.704 [2024-11-18 15:06:27.265672] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.704 [2024-11-18 15:06:27.265695] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.704 [2024-11-18 15:06:27.265700] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1015510) 00:20:03.704 [2024-11-18 15:06:27.265711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.704 [2024-11-18 15:06:27.265751] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061e20, cid 4, qid 0 00:20:03.704 [2024-11-18 15:06:27.265826] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.704 [2024-11-18 15:06:27.265836] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.704 [2024-11-18 15:06:27.265839] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.704 [2024-11-18 15:06:27.265843] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1015510): datao=0, datal=8, cccid=4 00:20:03.704 [2024-11-18 15:06:27.265847] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1061e20) on tqpair(0x1015510): expected_datao=0, payload_size=8 00:20:03.704 [2024-11-18 15:06:27.265854] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.704 [2024-11-18 15:06:27.265857] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.968 [2024-11-18 15:06:27.306401] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.968 [2024-11-18 15:06:27.306420] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.968 [2024-11-18 15:06:27.306424] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.968 [2024-11-18 15:06:27.306428] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061e20) on tqpair=0x1015510 00:20:03.968 ===================================================== 00:20:03.968 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:03.968 ===================================================== 00:20:03.968 Controller Capabilities/Features 00:20:03.968 ================================ 00:20:03.968 Vendor ID: 0000 00:20:03.968 Subsystem Vendor ID: 0000 00:20:03.968 Serial Number: .................... 00:20:03.968 Model Number: ........................................ 
00:20:03.968 Firmware Version: 24.01.1 00:20:03.968 Recommended Arb Burst: 0 00:20:03.968 IEEE OUI Identifier: 00 00 00 00:20:03.968 Multi-path I/O 00:20:03.968 May have multiple subsystem ports: No 00:20:03.968 May have multiple controllers: No 00:20:03.968 Associated with SR-IOV VF: No 00:20:03.968 Max Data Transfer Size: 131072 00:20:03.968 Max Number of Namespaces: 0 00:20:03.968 Max Number of I/O Queues: 1024 00:20:03.968 NVMe Specification Version (VS): 1.3 00:20:03.968 NVMe Specification Version (Identify): 1.3 00:20:03.968 Maximum Queue Entries: 128 00:20:03.968 Contiguous Queues Required: Yes 00:20:03.968 Arbitration Mechanisms Supported 00:20:03.968 Weighted Round Robin: Not Supported 00:20:03.968 Vendor Specific: Not Supported 00:20:03.968 Reset Timeout: 15000 ms 00:20:03.968 Doorbell Stride: 4 bytes 00:20:03.968 NVM Subsystem Reset: Not Supported 00:20:03.968 Command Sets Supported 00:20:03.968 NVM Command Set: Supported 00:20:03.968 Boot Partition: Not Supported 00:20:03.968 Memory Page Size Minimum: 4096 bytes 00:20:03.968 Memory Page Size Maximum: 4096 bytes 00:20:03.968 Persistent Memory Region: Not Supported 00:20:03.968 Optional Asynchronous Events Supported 00:20:03.968 Namespace Attribute Notices: Not Supported 00:20:03.968 Firmware Activation Notices: Not Supported 00:20:03.968 ANA Change Notices: Not Supported 00:20:03.968 PLE Aggregate Log Change Notices: Not Supported 00:20:03.968 LBA Status Info Alert Notices: Not Supported 00:20:03.968 EGE Aggregate Log Change Notices: Not Supported 00:20:03.968 Normal NVM Subsystem Shutdown event: Not Supported 00:20:03.969 Zone Descriptor Change Notices: Not Supported 00:20:03.969 Discovery Log Change Notices: Supported 00:20:03.969 Controller Attributes 00:20:03.969 128-bit Host Identifier: Not Supported 00:20:03.969 Non-Operational Permissive Mode: Not Supported 00:20:03.969 NVM Sets: Not Supported 00:20:03.969 Read Recovery Levels: Not Supported 00:20:03.969 Endurance Groups: Not Supported 00:20:03.969 Predictable Latency Mode: Not Supported 00:20:03.969 Traffic Based Keep ALive: Not Supported 00:20:03.969 Namespace Granularity: Not Supported 00:20:03.969 SQ Associations: Not Supported 00:20:03.969 UUID List: Not Supported 00:20:03.969 Multi-Domain Subsystem: Not Supported 00:20:03.969 Fixed Capacity Management: Not Supported 00:20:03.969 Variable Capacity Management: Not Supported 00:20:03.969 Delete Endurance Group: Not Supported 00:20:03.969 Delete NVM Set: Not Supported 00:20:03.969 Extended LBA Formats Supported: Not Supported 00:20:03.969 Flexible Data Placement Supported: Not Supported 00:20:03.969 00:20:03.969 Controller Memory Buffer Support 00:20:03.969 ================================ 00:20:03.969 Supported: No 00:20:03.969 00:20:03.969 Persistent Memory Region Support 00:20:03.969 ================================ 00:20:03.969 Supported: No 00:20:03.969 00:20:03.969 Admin Command Set Attributes 00:20:03.969 ============================ 00:20:03.969 Security Send/Receive: Not Supported 00:20:03.969 Format NVM: Not Supported 00:20:03.969 Firmware Activate/Download: Not Supported 00:20:03.969 Namespace Management: Not Supported 00:20:03.969 Device Self-Test: Not Supported 00:20:03.969 Directives: Not Supported 00:20:03.969 NVMe-MI: Not Supported 00:20:03.969 Virtualization Management: Not Supported 00:20:03.969 Doorbell Buffer Config: Not Supported 00:20:03.969 Get LBA Status Capability: Not Supported 00:20:03.969 Command & Feature Lockdown Capability: Not Supported 00:20:03.969 Abort Command Limit: 1 00:20:03.969 
Async Event Request Limit: 4 00:20:03.969 Number of Firmware Slots: N/A 00:20:03.969 Firmware Slot 1 Read-Only: N/A 00:20:03.969 Firmware Activation Without Reset: N/A 00:20:03.969 Multiple Update Detection Support: N/A 00:20:03.969 Firmware Update Granularity: No Information Provided 00:20:03.969 Per-Namespace SMART Log: No 00:20:03.969 Asymmetric Namespace Access Log Page: Not Supported 00:20:03.969 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:03.969 Command Effects Log Page: Not Supported 00:20:03.969 Get Log Page Extended Data: Supported 00:20:03.969 Telemetry Log Pages: Not Supported 00:20:03.969 Persistent Event Log Pages: Not Supported 00:20:03.969 Supported Log Pages Log Page: May Support 00:20:03.969 Commands Supported & Effects Log Page: Not Supported 00:20:03.969 Feature Identifiers & Effects Log Page:May Support 00:20:03.969 NVMe-MI Commands & Effects Log Page: May Support 00:20:03.969 Data Area 4 for Telemetry Log: Not Supported 00:20:03.969 Error Log Page Entries Supported: 128 00:20:03.969 Keep Alive: Not Supported 00:20:03.969 00:20:03.969 NVM Command Set Attributes 00:20:03.969 ========================== 00:20:03.969 Submission Queue Entry Size 00:20:03.969 Max: 1 00:20:03.969 Min: 1 00:20:03.969 Completion Queue Entry Size 00:20:03.969 Max: 1 00:20:03.969 Min: 1 00:20:03.969 Number of Namespaces: 0 00:20:03.969 Compare Command: Not Supported 00:20:03.969 Write Uncorrectable Command: Not Supported 00:20:03.969 Dataset Management Command: Not Supported 00:20:03.969 Write Zeroes Command: Not Supported 00:20:03.969 Set Features Save Field: Not Supported 00:20:03.969 Reservations: Not Supported 00:20:03.969 Timestamp: Not Supported 00:20:03.969 Copy: Not Supported 00:20:03.969 Volatile Write Cache: Not Present 00:20:03.969 Atomic Write Unit (Normal): 1 00:20:03.969 Atomic Write Unit (PFail): 1 00:20:03.969 Atomic Compare & Write Unit: 1 00:20:03.969 Fused Compare & Write: Supported 00:20:03.969 Scatter-Gather List 00:20:03.969 SGL Command Set: Supported 00:20:03.969 SGL Keyed: Supported 00:20:03.969 SGL Bit Bucket Descriptor: Not Supported 00:20:03.969 SGL Metadata Pointer: Not Supported 00:20:03.969 Oversized SGL: Not Supported 00:20:03.969 SGL Metadata Address: Not Supported 00:20:03.969 SGL Offset: Supported 00:20:03.969 Transport SGL Data Block: Not Supported 00:20:03.969 Replay Protected Memory Block: Not Supported 00:20:03.969 00:20:03.969 Firmware Slot Information 00:20:03.969 ========================= 00:20:03.969 Active slot: 0 00:20:03.969 00:20:03.969 00:20:03.969 Error Log 00:20:03.969 ========= 00:20:03.969 00:20:03.969 Active Namespaces 00:20:03.969 ================= 00:20:03.969 Discovery Log Page 00:20:03.969 ================== 00:20:03.969 Generation Counter: 2 00:20:03.969 Number of Records: 2 00:20:03.969 Record Format: 0 00:20:03.969 00:20:03.969 Discovery Log Entry 0 00:20:03.969 ---------------------- 00:20:03.969 Transport Type: 3 (TCP) 00:20:03.969 Address Family: 1 (IPv4) 00:20:03.969 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:03.969 Entry Flags: 00:20:03.969 Duplicate Returned Information: 1 00:20:03.969 Explicit Persistent Connection Support for Discovery: 1 00:20:03.969 Transport Requirements: 00:20:03.969 Secure Channel: Not Required 00:20:03.969 Port ID: 0 (0x0000) 00:20:03.969 Controller ID: 65535 (0xffff) 00:20:03.969 Admin Max SQ Size: 128 00:20:03.969 Transport Service Identifier: 4420 00:20:03.969 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:03.969 Transport Address: 10.0.0.2 00:20:03.969 
Discovery Log Entry 1 00:20:03.969 ---------------------- 00:20:03.969 Transport Type: 3 (TCP) 00:20:03.969 Address Family: 1 (IPv4) 00:20:03.969 Subsystem Type: 2 (NVM Subsystem) 00:20:03.969 Entry Flags: 00:20:03.969 Duplicate Returned Information: 0 00:20:03.969 Explicit Persistent Connection Support for Discovery: 0 00:20:03.969 Transport Requirements: 00:20:03.969 Secure Channel: Not Required 00:20:03.969 Port ID: 0 (0x0000) 00:20:03.969 Controller ID: 65535 (0xffff) 00:20:03.969 Admin Max SQ Size: 128 00:20:03.969 Transport Service Identifier: 4420 00:20:03.969 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:03.969 Transport Address: 10.0.0.2 [2024-11-18 15:06:27.306517] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:03.969 [2024-11-18 15:06:27.306533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.969 [2024-11-18 15:06:27.306539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.969 [2024-11-18 15:06:27.306545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.969 [2024-11-18 15:06:27.306550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.969 [2024-11-18 15:06:27.306559] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.969 [2024-11-18 15:06:27.306563] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.969 [2024-11-18 15:06:27.306566] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.969 [2024-11-18 15:06:27.306574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.969 [2024-11-18 15:06:27.306599] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.969 [2024-11-18 15:06:27.306661] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.969 [2024-11-18 15:06:27.306673] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.969 [2024-11-18 15:06:27.306678] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.969 [2024-11-18 15:06:27.306681] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061cc0) on tqpair=0x1015510 00:20:03.969 [2024-11-18 15:06:27.306690] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.969 [2024-11-18 15:06:27.306694] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.969 [2024-11-18 15:06:27.306698] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.969 [2024-11-18 15:06:27.306705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.969 [2024-11-18 15:06:27.306733] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.969 [2024-11-18 15:06:27.306801] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.969 [2024-11-18 15:06:27.306812] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.969 [2024-11-18 15:06:27.306817] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.969 [2024-11-18 15:06:27.306823] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061cc0) on tqpair=0x1015510 00:20:03.969 [2024-11-18 15:06:27.306831] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:03.969 [2024-11-18 15:06:27.306838] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:03.970 [2024-11-18 15:06:27.306853] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.306861] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.306867] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.970 [2024-11-18 15:06:27.306874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.970 [2024-11-18 15:06:27.306898] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.970 [2024-11-18 15:06:27.306953] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.970 [2024-11-18 15:06:27.306961] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.970 [2024-11-18 15:06:27.306967] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.306972] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061cc0) on tqpair=0x1015510 00:20:03.970 [2024-11-18 15:06:27.306989] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.306997] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.307003] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.970 [2024-11-18 15:06:27.307013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.970 [2024-11-18 15:06:27.307044] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.970 [2024-11-18 15:06:27.307093] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.970 [2024-11-18 15:06:27.307100] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.970 [2024-11-18 15:06:27.307104] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.307110] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061cc0) on tqpair=0x1015510 00:20:03.970 [2024-11-18 15:06:27.307127] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.307134] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.307138] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.970 [2024-11-18 15:06:27.307145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.970 [2024-11-18 15:06:27.307166] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.970 [2024-11-18 15:06:27.307215] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.970 [2024-11-18 
15:06:27.307223] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.970 [2024-11-18 15:06:27.307226] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.307230] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061cc0) on tqpair=0x1015510 00:20:03.970 [2024-11-18 15:06:27.307241] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.307245] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.307248] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.970 [2024-11-18 15:06:27.307255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.970 [2024-11-18 15:06:27.307278] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.970 [2024-11-18 15:06:27.307398] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.970 [2024-11-18 15:06:27.307409] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.970 [2024-11-18 15:06:27.307413] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.307417] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061cc0) on tqpair=0x1015510 00:20:03.970 [2024-11-18 15:06:27.307429] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.307433] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.307437] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.970 [2024-11-18 15:06:27.307445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.970 [2024-11-18 15:06:27.307468] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.970 [2024-11-18 15:06:27.307523] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.970 [2024-11-18 15:06:27.307532] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.970 [2024-11-18 15:06:27.307535] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.307539] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061cc0) on tqpair=0x1015510 00:20:03.970 [2024-11-18 15:06:27.307550] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.307555] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.307558] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.970 [2024-11-18 15:06:27.307565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.970 [2024-11-18 15:06:27.307587] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.970 [2024-11-18 15:06:27.307644] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.970 [2024-11-18 15:06:27.307653] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.970 [2024-11-18 15:06:27.307656] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:20:03.970 [2024-11-18 15:06:27.307660] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061cc0) on tqpair=0x1015510 00:20:03.970 [2024-11-18 15:06:27.307672] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.307676] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.307680] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.970 [2024-11-18 15:06:27.307687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.970 [2024-11-18 15:06:27.307708] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.970 [2024-11-18 15:06:27.307769] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.970 [2024-11-18 15:06:27.307779] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.970 [2024-11-18 15:06:27.307785] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.307790] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061cc0) on tqpair=0x1015510 00:20:03.970 [2024-11-18 15:06:27.307806] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.307811] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.307816] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.970 [2024-11-18 15:06:27.307823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.970 [2024-11-18 15:06:27.307843] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.970 [2024-11-18 15:06:27.307893] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.970 [2024-11-18 15:06:27.307901] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.970 [2024-11-18 15:06:27.307905] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.307908] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061cc0) on tqpair=0x1015510 00:20:03.970 [2024-11-18 15:06:27.307921] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.307929] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.307935] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.970 [2024-11-18 15:06:27.307945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.970 [2024-11-18 15:06:27.307974] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.970 [2024-11-18 15:06:27.308022] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.970 [2024-11-18 15:06:27.308031] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.970 [2024-11-18 15:06:27.308037] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.308043] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061cc0) on tqpair=0x1015510 00:20:03.970 [2024-11-18 15:06:27.308059] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.308064] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.308068] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.970 [2024-11-18 15:06:27.308074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.970 [2024-11-18 15:06:27.308097] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.970 [2024-11-18 15:06:27.308146] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.970 [2024-11-18 15:06:27.308154] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.970 [2024-11-18 15:06:27.308158] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.308161] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061cc0) on tqpair=0x1015510 00:20:03.970 [2024-11-18 15:06:27.308172] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.308176] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.308179] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.970 [2024-11-18 15:06:27.308186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.970 [2024-11-18 15:06:27.308213] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.970 [2024-11-18 15:06:27.308261] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.970 [2024-11-18 15:06:27.308272] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.970 [2024-11-18 15:06:27.308277] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.308280] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061cc0) on tqpair=0x1015510 00:20:03.970 [2024-11-18 15:06:27.308291] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.308296] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.970 [2024-11-18 15:06:27.308300] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.970 [2024-11-18 15:06:27.308307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.970 [2024-11-18 15:06:27.308327] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.970 [2024-11-18 15:06:27.308381] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.970 [2024-11-18 15:06:27.308393] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.971 [2024-11-18 15:06:27.308400] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.308406] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061cc0) on tqpair=0x1015510 00:20:03.971 [2024-11-18 15:06:27.308420] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.308425] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.971 [2024-11-18 
15:06:27.308428] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.971 [2024-11-18 15:06:27.308435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.971 [2024-11-18 15:06:27.308457] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.971 [2024-11-18 15:06:27.308515] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.971 [2024-11-18 15:06:27.308526] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.971 [2024-11-18 15:06:27.308531] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.308537] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061cc0) on tqpair=0x1015510 00:20:03.971 [2024-11-18 15:06:27.308552] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.308557] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.308560] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.971 [2024-11-18 15:06:27.308567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.971 [2024-11-18 15:06:27.308588] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.971 [2024-11-18 15:06:27.308640] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.971 [2024-11-18 15:06:27.308652] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.971 [2024-11-18 15:06:27.308656] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.308660] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061cc0) on tqpair=0x1015510 00:20:03.971 [2024-11-18 15:06:27.308671] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.308675] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.308679] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.971 [2024-11-18 15:06:27.308686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.971 [2024-11-18 15:06:27.308708] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.971 [2024-11-18 15:06:27.308762] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.971 [2024-11-18 15:06:27.308771] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.971 [2024-11-18 15:06:27.308774] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.308777] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061cc0) on tqpair=0x1015510 00:20:03.971 [2024-11-18 15:06:27.308788] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.308792] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.308795] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.971 [2024-11-18 15:06:27.308802] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.971 [2024-11-18 15:06:27.308823] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.971 [2024-11-18 15:06:27.308869] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.971 [2024-11-18 15:06:27.308878] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.971 [2024-11-18 15:06:27.308881] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.308885] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061cc0) on tqpair=0x1015510 00:20:03.971 [2024-11-18 15:06:27.308895] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.308899] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.308903] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.971 [2024-11-18 15:06:27.308910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.971 [2024-11-18 15:06:27.308930] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.971 [2024-11-18 15:06:27.309010] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.971 [2024-11-18 15:06:27.309023] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.971 [2024-11-18 15:06:27.309028] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.309032] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061cc0) on tqpair=0x1015510 00:20:03.971 [2024-11-18 15:06:27.309043] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.309047] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.309051] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.971 [2024-11-18 15:06:27.309058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.971 [2024-11-18 15:06:27.309079] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.971 [2024-11-18 15:06:27.309141] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.971 [2024-11-18 15:06:27.309152] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.971 [2024-11-18 15:06:27.309156] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.309160] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061cc0) on tqpair=0x1015510 00:20:03.971 [2024-11-18 15:06:27.309171] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.309175] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.309178] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.971 [2024-11-18 15:06:27.309185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.971 [2024-11-18 15:06:27.309209] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.971 [2024-11-18 15:06:27.309263] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.971 [2024-11-18 15:06:27.309272] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.971 [2024-11-18 15:06:27.309275] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.309278] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061cc0) on tqpair=0x1015510 00:20:03.971 [2024-11-18 15:06:27.309289] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.309293] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.309297] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1015510) 00:20:03.971 [2024-11-18 15:06:27.309304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.971 [2024-11-18 15:06:27.313371] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1061cc0, cid 3, qid 0 00:20:03.971 [2024-11-18 15:06:27.313436] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.971 [2024-11-18 15:06:27.313445] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.971 [2024-11-18 15:06:27.313448] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.313452] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1061cc0) on tqpair=0x1015510 00:20:03.971 [2024-11-18 15:06:27.313461] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:20:03.971 00:20:03.971 15:06:27 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:03.971 [2024-11-18 15:06:27.348981] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:20:03.971 [2024-11-18 15:06:27.349047] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93684 ] 00:20:03.971 [2024-11-18 15:06:27.486497] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:03.971 [2024-11-18 15:06:27.486564] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:03.971 [2024-11-18 15:06:27.486571] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:03.971 [2024-11-18 15:06:27.486580] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:03.971 [2024-11-18 15:06:27.486587] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:03.971 [2024-11-18 15:06:27.486675] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:03.971 [2024-11-18 15:06:27.486717] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7cb510 0 00:20:03.971 [2024-11-18 15:06:27.501402] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:03.971 [2024-11-18 15:06:27.501426] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:03.971 [2024-11-18 15:06:27.501448] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:03.971 [2024-11-18 15:06:27.501451] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:03.971 [2024-11-18 15:06:27.501487] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.501494] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.501497] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7cb510) 00:20:03.971 [2024-11-18 15:06:27.501507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:03.971 [2024-11-18 15:06:27.501549] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8178a0, cid 0, qid 0 00:20:03.971 [2024-11-18 15:06:27.508336] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.971 [2024-11-18 15:06:27.508359] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.971 [2024-11-18 15:06:27.508380] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.971 [2024-11-18 15:06:27.508384] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8178a0) on tqpair=0x7cb510 00:20:03.971 [2024-11-18 15:06:27.508393] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:03.971 [2024-11-18 15:06:27.508400] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:03.971 [2024-11-18 15:06:27.508406] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:03.972 [2024-11-18 15:06:27.508420] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.508425] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.508428] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7cb510) 00:20:03.972 [2024-11-18 15:06:27.508437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.972 [2024-11-18 15:06:27.508465] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8178a0, cid 0, qid 0 00:20:03.972 [2024-11-18 15:06:27.508537] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.972 [2024-11-18 15:06:27.508544] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.972 [2024-11-18 15:06:27.508547] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.508551] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8178a0) on tqpair=0x7cb510 00:20:03.972 [2024-11-18 15:06:27.508556] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:03.972 [2024-11-18 15:06:27.508563] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:03.972 [2024-11-18 15:06:27.508571] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.508574] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.508578] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7cb510) 00:20:03.972 [2024-11-18 15:06:27.508585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.972 [2024-11-18 15:06:27.508607] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8178a0, cid 0, qid 0 00:20:03.972 [2024-11-18 15:06:27.508669] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.972 [2024-11-18 15:06:27.508680] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.972 [2024-11-18 15:06:27.508684] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.508688] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8178a0) on tqpair=0x7cb510 00:20:03.972 [2024-11-18 15:06:27.508693] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:03.972 [2024-11-18 15:06:27.508702] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:03.972 [2024-11-18 15:06:27.508710] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.508714] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.508718] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7cb510) 00:20:03.972 [2024-11-18 15:06:27.508725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.972 [2024-11-18 15:06:27.508747] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8178a0, cid 0, qid 0 00:20:03.972 [2024-11-18 15:06:27.508801] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.972 [2024-11-18 15:06:27.508810] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.972 [2024-11-18 15:06:27.508813] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.508817] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8178a0) on tqpair=0x7cb510 00:20:03.972 [2024-11-18 15:06:27.508822] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:03.972 [2024-11-18 15:06:27.508833] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.508838] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.508841] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7cb510) 00:20:03.972 [2024-11-18 15:06:27.508848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.972 [2024-11-18 15:06:27.508872] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8178a0, cid 0, qid 0 00:20:03.972 [2024-11-18 15:06:27.508922] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.972 [2024-11-18 15:06:27.508931] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.972 [2024-11-18 15:06:27.508935] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.508939] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8178a0) on tqpair=0x7cb510 00:20:03.972 [2024-11-18 15:06:27.508944] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:03.972 [2024-11-18 15:06:27.508949] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:03.972 [2024-11-18 15:06:27.508984] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:03.972 [2024-11-18 15:06:27.509091] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:03.972 [2024-11-18 15:06:27.509103] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:03.972 [2024-11-18 15:06:27.509114] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.509119] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.509122] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7cb510) 00:20:03.972 [2024-11-18 15:06:27.509130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.972 [2024-11-18 15:06:27.509154] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8178a0, cid 0, qid 0 00:20:03.972 [2024-11-18 15:06:27.509214] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.972 [2024-11-18 15:06:27.509226] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.972 [2024-11-18 15:06:27.509231] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.509234] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8178a0) on tqpair=0x7cb510 00:20:03.972 [2024-11-18 15:06:27.509239] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:03.972 [2024-11-18 15:06:27.509251] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.509256] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.509259] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7cb510) 00:20:03.972 [2024-11-18 15:06:27.509267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.972 [2024-11-18 15:06:27.509307] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8178a0, cid 0, qid 0 00:20:03.972 [2024-11-18 15:06:27.509386] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.972 [2024-11-18 15:06:27.509396] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.972 [2024-11-18 15:06:27.509399] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.509403] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8178a0) on tqpair=0x7cb510 00:20:03.972 [2024-11-18 15:06:27.509408] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:03.972 [2024-11-18 15:06:27.509413] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:03.972 [2024-11-18 15:06:27.509421] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:03.972 [2024-11-18 15:06:27.509435] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:03.972 [2024-11-18 15:06:27.509446] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.509453] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.509459] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7cb510) 00:20:03.972 [2024-11-18 15:06:27.509470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.972 [2024-11-18 15:06:27.509501] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8178a0, cid 0, qid 0 00:20:03.972 [2024-11-18 15:06:27.509587] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.972 [2024-11-18 15:06:27.509598] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.972 [2024-11-18 15:06:27.509602] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.509606] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7cb510): datao=0, datal=4096, cccid=0 00:20:03.972 [2024-11-18 15:06:27.509611] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8178a0) on tqpair(0x7cb510): expected_datao=0, payload_size=4096 00:20:03.972 [2024-11-18 15:06:27.509619] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.509623] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.972 [2024-11-18 
15:06:27.509632] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.972 [2024-11-18 15:06:27.509637] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.972 [2024-11-18 15:06:27.509641] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.509644] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8178a0) on tqpair=0x7cb510 00:20:03.972 [2024-11-18 15:06:27.509652] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:03.972 [2024-11-18 15:06:27.509658] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:03.972 [2024-11-18 15:06:27.509665] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:03.972 [2024-11-18 15:06:27.509671] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:03.972 [2024-11-18 15:06:27.509677] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:03.972 [2024-11-18 15:06:27.509684] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:03.972 [2024-11-18 15:06:27.509704] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:03.972 [2024-11-18 15:06:27.509729] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.509733] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.972 [2024-11-18 15:06:27.509736] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7cb510) 00:20:03.972 [2024-11-18 15:06:27.509744] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:03.972 [2024-11-18 15:06:27.509768] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8178a0, cid 0, qid 0 00:20:03.972 [2024-11-18 15:06:27.509824] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.972 [2024-11-18 15:06:27.509835] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.973 [2024-11-18 15:06:27.509839] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.509843] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8178a0) on tqpair=0x7cb510 00:20:03.973 [2024-11-18 15:06:27.509850] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.509856] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.509862] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7cb510) 00:20:03.973 [2024-11-18 15:06:27.509871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.973 [2024-11-18 15:06:27.509881] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.509887] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.509892] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7cb510) 
00:20:03.973 [2024-11-18 15:06:27.509900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.973 [2024-11-18 15:06:27.509910] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.509916] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.509920] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7cb510) 00:20:03.973 [2024-11-18 15:06:27.509925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.973 [2024-11-18 15:06:27.509931] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.509934] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.509938] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7cb510) 00:20:03.973 [2024-11-18 15:06:27.509943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.973 [2024-11-18 15:06:27.509948] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:03.973 [2024-11-18 15:06:27.509962] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:03.973 [2024-11-18 15:06:27.509973] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.509980] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.509986] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7cb510) 00:20:03.973 [2024-11-18 15:06:27.509996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.973 [2024-11-18 15:06:27.510027] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8178a0, cid 0, qid 0 00:20:03.973 [2024-11-18 15:06:27.510037] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x817a00, cid 1, qid 0 00:20:03.973 [2024-11-18 15:06:27.510044] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x817b60, cid 2, qid 0 00:20:03.973 [2024-11-18 15:06:27.510051] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x817cc0, cid 3, qid 0 00:20:03.973 [2024-11-18 15:06:27.510058] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x817e20, cid 4, qid 0 00:20:03.973 [2024-11-18 15:06:27.510130] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.973 [2024-11-18 15:06:27.510136] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.973 [2024-11-18 15:06:27.510140] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.510143] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x817e20) on tqpair=0x7cb510 00:20:03.973 [2024-11-18 15:06:27.510148] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:03.973 [2024-11-18 15:06:27.510155] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:20:03.973 [2024-11-18 15:06:27.510167] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:03.973 [2024-11-18 15:06:27.510183] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:03.973 [2024-11-18 15:06:27.510195] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.510199] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.510203] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7cb510) 00:20:03.973 [2024-11-18 15:06:27.510210] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:03.973 [2024-11-18 15:06:27.510233] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x817e20, cid 4, qid 0 00:20:03.973 [2024-11-18 15:06:27.510293] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.973 [2024-11-18 15:06:27.510303] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.973 [2024-11-18 15:06:27.510309] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.510314] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x817e20) on tqpair=0x7cb510 00:20:03.973 [2024-11-18 15:06:27.510388] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:03.973 [2024-11-18 15:06:27.510404] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:03.973 [2024-11-18 15:06:27.510412] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.510416] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.510419] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7cb510) 00:20:03.973 [2024-11-18 15:06:27.510427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.973 [2024-11-18 15:06:27.510457] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x817e20, cid 4, qid 0 00:20:03.973 [2024-11-18 15:06:27.510523] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.973 [2024-11-18 15:06:27.510531] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.973 [2024-11-18 15:06:27.510535] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.510538] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7cb510): datao=0, datal=4096, cccid=4 00:20:03.973 [2024-11-18 15:06:27.510543] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x817e20) on tqpair(0x7cb510): expected_datao=0, payload_size=4096 00:20:03.973 [2024-11-18 15:06:27.510553] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.510560] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.510572] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:20:03.973 [2024-11-18 15:06:27.510581] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.973 [2024-11-18 15:06:27.510586] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.510590] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x817e20) on tqpair=0x7cb510 00:20:03.973 [2024-11-18 15:06:27.510607] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:03.973 [2024-11-18 15:06:27.510624] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:03.973 [2024-11-18 15:06:27.510640] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:03.973 [2024-11-18 15:06:27.510650] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.510654] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.510658] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7cb510) 00:20:03.973 [2024-11-18 15:06:27.510665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.973 [2024-11-18 15:06:27.510687] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x817e20, cid 4, qid 0 00:20:03.973 [2024-11-18 15:06:27.510764] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.973 [2024-11-18 15:06:27.510776] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.973 [2024-11-18 15:06:27.510782] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.510788] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7cb510): datao=0, datal=4096, cccid=4 00:20:03.973 [2024-11-18 15:06:27.510795] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x817e20) on tqpair(0x7cb510): expected_datao=0, payload_size=4096 00:20:03.973 [2024-11-18 15:06:27.510805] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.510809] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.510818] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.973 [2024-11-18 15:06:27.510824] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.973 [2024-11-18 15:06:27.510827] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.973 [2024-11-18 15:06:27.510830] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x817e20) on tqpair=0x7cb510 00:20:03.974 [2024-11-18 15:06:27.510846] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:03.974 [2024-11-18 15:06:27.510858] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:03.974 [2024-11-18 15:06:27.510868] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.510875] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.510880] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7cb510) 00:20:03.974 [2024-11-18 15:06:27.510890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.974 [2024-11-18 15:06:27.510919] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x817e20, cid 4, qid 0 00:20:03.974 [2024-11-18 15:06:27.510982] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.974 [2024-11-18 15:06:27.510991] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.974 [2024-11-18 15:06:27.510995] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.510998] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7cb510): datao=0, datal=4096, cccid=4 00:20:03.974 [2024-11-18 15:06:27.511002] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x817e20) on tqpair(0x7cb510): expected_datao=0, payload_size=4096 00:20:03.974 [2024-11-18 15:06:27.511009] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511013] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511021] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.974 [2024-11-18 15:06:27.511027] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.974 [2024-11-18 15:06:27.511030] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511034] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x817e20) on tqpair=0x7cb510 00:20:03.974 [2024-11-18 15:06:27.511042] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:03.974 [2024-11-18 15:06:27.511051] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:03.974 [2024-11-18 15:06:27.511061] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:03.974 [2024-11-18 15:06:27.511068] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:03.974 [2024-11-18 15:06:27.511074] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:03.974 [2024-11-18 15:06:27.511081] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:03.974 [2024-11-18 15:06:27.511088] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:03.974 [2024-11-18 15:06:27.511095] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:03.974 [2024-11-18 15:06:27.511122] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511129] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511132] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7cb510) 00:20:03.974 [2024-11-18 15:06:27.511140] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.974 [2024-11-18 15:06:27.511146] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511150] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511153] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7cb510) 00:20:03.974 [2024-11-18 15:06:27.511159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.974 [2024-11-18 15:06:27.511187] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x817e20, cid 4, qid 0 00:20:03.974 [2024-11-18 15:06:27.511195] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x817f80, cid 5, qid 0 00:20:03.974 [2024-11-18 15:06:27.511265] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.974 [2024-11-18 15:06:27.511274] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.974 [2024-11-18 15:06:27.511277] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511281] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x817e20) on tqpair=0x7cb510 00:20:03.974 [2024-11-18 15:06:27.511288] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.974 [2024-11-18 15:06:27.511293] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.974 [2024-11-18 15:06:27.511296] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511299] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x817f80) on tqpair=0x7cb510 00:20:03.974 [2024-11-18 15:06:27.511310] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511332] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511342] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7cb510) 00:20:03.974 [2024-11-18 15:06:27.511352] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.974 [2024-11-18 15:06:27.511383] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x817f80, cid 5, qid 0 00:20:03.974 [2024-11-18 15:06:27.511447] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.974 [2024-11-18 15:06:27.511457] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.974 [2024-11-18 15:06:27.511463] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511469] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x817f80) on tqpair=0x7cb510 00:20:03.974 [2024-11-18 15:06:27.511485] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511493] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511497] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7cb510) 00:20:03.974 [2024-11-18 15:06:27.511503] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.974 [2024-11-18 
15:06:27.511534] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x817f80, cid 5, qid 0 00:20:03.974 [2024-11-18 15:06:27.511584] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.974 [2024-11-18 15:06:27.511592] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.974 [2024-11-18 15:06:27.511595] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511599] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x817f80) on tqpair=0x7cb510 00:20:03.974 [2024-11-18 15:06:27.511609] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511613] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511617] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7cb510) 00:20:03.974 [2024-11-18 15:06:27.511623] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.974 [2024-11-18 15:06:27.511645] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x817f80, cid 5, qid 0 00:20:03.974 [2024-11-18 15:06:27.511699] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.974 [2024-11-18 15:06:27.511711] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.974 [2024-11-18 15:06:27.511714] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511718] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x817f80) on tqpair=0x7cb510 00:20:03.974 [2024-11-18 15:06:27.511732] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511737] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511740] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7cb510) 00:20:03.974 [2024-11-18 15:06:27.511747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.974 [2024-11-18 15:06:27.511753] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511757] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511760] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7cb510) 00:20:03.974 [2024-11-18 15:06:27.511767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.974 [2024-11-18 15:06:27.511776] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511783] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511789] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x7cb510) 00:20:03.974 [2024-11-18 15:06:27.511798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.974 [2024-11-18 15:06:27.511809] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511813] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.511816] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7cb510) 00:20:03.974 [2024-11-18 15:06:27.511822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.974 [2024-11-18 15:06:27.511855] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x817f80, cid 5, qid 0 00:20:03.974 [2024-11-18 15:06:27.511866] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x817e20, cid 4, qid 0 00:20:03.974 [2024-11-18 15:06:27.511871] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8180e0, cid 6, qid 0 00:20:03.974 [2024-11-18 15:06:27.511876] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x818240, cid 7, qid 0 00:20:03.974 [2024-11-18 15:06:27.511997] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.974 [2024-11-18 15:06:27.512009] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.974 [2024-11-18 15:06:27.512012] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.512016] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7cb510): datao=0, datal=8192, cccid=5 00:20:03.974 [2024-11-18 15:06:27.512020] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x817f80) on tqpair(0x7cb510): expected_datao=0, payload_size=8192 00:20:03.974 [2024-11-18 15:06:27.512038] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.512043] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.974 [2024-11-18 15:06:27.512048] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.975 [2024-11-18 15:06:27.512054] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.975 [2024-11-18 15:06:27.512057] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.975 [2024-11-18 15:06:27.512060] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7cb510): datao=0, datal=512, cccid=4 00:20:03.975 [2024-11-18 15:06:27.512064] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x817e20) on tqpair(0x7cb510): expected_datao=0, payload_size=512 00:20:03.975 [2024-11-18 15:06:27.512070] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.975 [2024-11-18 15:06:27.512074] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.975 [2024-11-18 15:06:27.512080] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.975 [2024-11-18 15:06:27.512088] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.975 [2024-11-18 15:06:27.512093] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.975 [2024-11-18 15:06:27.512098] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7cb510): datao=0, datal=512, cccid=6 00:20:03.975 [2024-11-18 15:06:27.512105] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8180e0) on tqpair(0x7cb510): expected_datao=0, payload_size=512 00:20:03.975 [2024-11-18 15:06:27.512114] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.975 [2024-11-18 15:06:27.512118] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.975 [2024-11-18 15:06:27.512123] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:03.975 [2024-11-18 15:06:27.512128] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:03.975 [2024-11-18 15:06:27.512132] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:03.975 [2024-11-18 15:06:27.512135] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7cb510): datao=0, datal=4096, cccid=7 00:20:03.975 [2024-11-18 15:06:27.512139] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x818240) on tqpair(0x7cb510): expected_datao=0, payload_size=4096 00:20:03.975 [2024-11-18 15:06:27.512145] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:03.975 [2024-11-18 15:06:27.512149] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:03.975 [2024-11-18 15:06:27.512156] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.975 [2024-11-18 15:06:27.512164] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.975 [2024-11-18 15:06:27.512170] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.975 [2024-11-18 15:06:27.512176] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x817f80) on tqpair=0x7cb510 00:20:03.975 [2024-11-18 15:06:27.512194] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.975 [2024-11-18 15:06:27.512201] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.975 [2024-11-18 15:06:27.512204] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.975 [2024-11-18 15:06:27.512208] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x817e20) on tqpair=0x7cb510 00:20:03.975 [2024-11-18 15:06:27.512217] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.975 [2024-11-18 15:06:27.512223] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.975 [2024-11-18 15:06:27.512227] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.975 [2024-11-18 15:06:27.512230] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8180e0) on tqpair=0x7cb510 00:20:03.975 [2024-11-18 15:06:27.512237] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.975 [2024-11-18 15:06:27.512242] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.975 [2024-11-18 15:06:27.512245] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.975 ===================================================== 00:20:03.975 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:03.975 ===================================================== 00:20:03.975 Controller Capabilities/Features 00:20:03.975 ================================ 00:20:03.975 Vendor ID: 8086 00:20:03.975 Subsystem Vendor ID: 8086 00:20:03.975 Serial Number: SPDK00000000000001 00:20:03.975 Model Number: SPDK bdev Controller 00:20:03.975 Firmware Version: 24.01.1 00:20:03.975 Recommended Arb Burst: 6 00:20:03.975 IEEE OUI Identifier: e4 d2 5c 00:20:03.975 Multi-path I/O 00:20:03.975 May have multiple subsystem ports: Yes 00:20:03.975 May have multiple controllers: Yes 00:20:03.975 Associated with SR-IOV VF: No 00:20:03.975 Max Data Transfer Size: 131072 00:20:03.975 Max Number of Namespaces: 32 00:20:03.975 Max Number of I/O Queues: 127 00:20:03.975 NVMe Specification Version (VS): 1.3 00:20:03.975 NVMe Specification Version (Identify): 1.3 00:20:03.975 Maximum 
Queue Entries: 128 00:20:03.975 Contiguous Queues Required: Yes 00:20:03.975 Arbitration Mechanisms Supported 00:20:03.975 Weighted Round Robin: Not Supported 00:20:03.975 Vendor Specific: Not Supported 00:20:03.975 Reset Timeout: 15000 ms 00:20:03.975 Doorbell Stride: 4 bytes 00:20:03.975 NVM Subsystem Reset: Not Supported 00:20:03.975 Command Sets Supported 00:20:03.975 NVM Command Set: Supported 00:20:03.975 Boot Partition: Not Supported 00:20:03.975 Memory Page Size Minimum: 4096 bytes 00:20:03.975 Memory Page Size Maximum: 4096 bytes 00:20:03.975 Persistent Memory Region: Not Supported 00:20:03.975 Optional Asynchronous Events Supported 00:20:03.975 Namespace Attribute Notices: Supported 00:20:03.975 Firmware Activation Notices: Not Supported 00:20:03.975 ANA Change Notices: Not Supported 00:20:03.975 PLE Aggregate Log Change Notices: Not Supported 00:20:03.975 LBA Status Info Alert Notices: Not Supported 00:20:03.975 EGE Aggregate Log Change Notices: Not Supported 00:20:03.975 Normal NVM Subsystem Shutdown event: Not Supported 00:20:03.975 Zone Descriptor Change Notices: Not Supported 00:20:03.975 Discovery Log Change Notices: Not Supported 00:20:03.975 Controller Attributes 00:20:03.975 128-bit Host Identifier: Supported 00:20:03.975 Non-Operational Permissive Mode: Not Supported 00:20:03.975 NVM Sets: Not Supported 00:20:03.975 Read Recovery Levels: Not Supported 00:20:03.975 Endurance Groups: Not Supported 00:20:03.975 Predictable Latency Mode: Not Supported 00:20:03.975 Traffic Based Keep ALive: Not Supported 00:20:03.975 Namespace Granularity: Not Supported 00:20:03.975 SQ Associations: Not Supported 00:20:03.975 UUID List: Not Supported 00:20:03.975 Multi-Domain Subsystem: Not Supported 00:20:03.975 Fixed Capacity Management: Not Supported 00:20:03.975 Variable Capacity Management: Not Supported 00:20:03.975 Delete Endurance Group: Not Supported 00:20:03.975 Delete NVM Set: Not Supported 00:20:03.975 Extended LBA Formats Supported: Not Supported 00:20:03.975 Flexible Data Placement Supported: Not Supported 00:20:03.975 00:20:03.975 Controller Memory Buffer Support 00:20:03.975 ================================ 00:20:03.975 Supported: No 00:20:03.975 00:20:03.975 Persistent Memory Region Support 00:20:03.975 ================================ 00:20:03.975 Supported: No 00:20:03.975 00:20:03.975 Admin Command Set Attributes 00:20:03.975 ============================ 00:20:03.975 Security Send/Receive: Not Supported 00:20:03.975 Format NVM: Not Supported 00:20:03.975 Firmware Activate/Download: Not Supported 00:20:03.975 Namespace Management: Not Supported 00:20:03.975 Device Self-Test: Not Supported 00:20:03.975 Directives: Not Supported 00:20:03.975 NVMe-MI: Not Supported 00:20:03.975 Virtualization Management: Not Supported 00:20:03.975 Doorbell Buffer Config: Not Supported 00:20:03.975 Get LBA Status Capability: Not Supported 00:20:03.975 Command & Feature Lockdown Capability: Not Supported 00:20:03.975 Abort Command Limit: 4 00:20:03.975 Async Event Request Limit: 4 00:20:03.975 Number of Firmware Slots: N/A 00:20:03.975 Firmware Slot 1 Read-Only: N/A 00:20:03.975 Firmware Activation Without Reset: N/A 00:20:03.975 Multiple Update Detection Support: N/A 00:20:03.975 Firmware Update Granularity: No Information Provided 00:20:03.975 Per-Namespace SMART Log: No 00:20:03.975 Asymmetric Namespace Access Log Page: Not Supported 00:20:03.975 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:03.975 Command Effects Log Page: Supported 00:20:03.975 Get Log Page Extended Data: Supported 
00:20:03.975 Telemetry Log Pages: Not Supported 00:20:03.975 Persistent Event Log Pages: Not Supported 00:20:03.975 Supported Log Pages Log Page: May Support 00:20:03.975 Commands Supported & Effects Log Page: Not Supported 00:20:03.975 Feature Identifiers & Effects Log Page:May Support 00:20:03.975 NVMe-MI Commands & Effects Log Page: May Support 00:20:03.975 Data Area 4 for Telemetry Log: Not Supported 00:20:03.975 Error Log Page Entries Supported: 128 00:20:03.975 Keep Alive: Supported 00:20:03.975 Keep Alive Granularity: 10000 ms 00:20:03.975 00:20:03.975 NVM Command Set Attributes 00:20:03.975 ========================== 00:20:03.975 Submission Queue Entry Size 00:20:03.975 Max: 64 00:20:03.975 Min: 64 00:20:03.975 Completion Queue Entry Size 00:20:03.975 Max: 16 00:20:03.975 Min: 16 00:20:03.975 Number of Namespaces: 32 00:20:03.975 Compare Command: Supported 00:20:03.975 Write Uncorrectable Command: Not Supported 00:20:03.975 Dataset Management Command: Supported 00:20:03.975 Write Zeroes Command: Supported 00:20:03.975 Set Features Save Field: Not Supported 00:20:03.975 Reservations: Supported 00:20:03.975 Timestamp: Not Supported 00:20:03.975 Copy: Supported 00:20:03.975 Volatile Write Cache: Present 00:20:03.975 Atomic Write Unit (Normal): 1 00:20:03.975 Atomic Write Unit (PFail): 1 00:20:03.975 Atomic Compare & Write Unit: 1 00:20:03.975 Fused Compare & Write: Supported 00:20:03.975 Scatter-Gather List 00:20:03.976 SGL Command Set: Supported 00:20:03.976 SGL Keyed: Supported 00:20:03.976 SGL Bit Bucket Descriptor: Not Supported 00:20:03.976 SGL Metadata Pointer: Not Supported 00:20:03.976 Oversized SGL: Not Supported 00:20:03.976 SGL Metadata Address: Not Supported 00:20:03.976 SGL Offset: Supported 00:20:03.976 Transport SGL Data Block: Not Supported 00:20:03.976 Replay Protected Memory Block: Not Supported 00:20:03.976 00:20:03.976 Firmware Slot Information 00:20:03.976 ========================= 00:20:03.976 Active slot: 1 00:20:03.976 Slot 1 Firmware Revision: 24.01.1 00:20:03.976 00:20:03.976 00:20:03.976 Commands Supported and Effects 00:20:03.976 ============================== 00:20:03.976 Admin Commands 00:20:03.976 -------------- 00:20:03.976 Get Log Page (02h): Supported 00:20:03.976 Identify (06h): Supported 00:20:03.976 Abort (08h): Supported 00:20:03.976 Set Features (09h): Supported 00:20:03.976 Get Features (0Ah): Supported 00:20:03.976 Asynchronous Event Request (0Ch): Supported 00:20:03.976 Keep Alive (18h): Supported 00:20:03.976 I/O Commands 00:20:03.976 ------------ 00:20:03.976 Flush (00h): Supported LBA-Change 00:20:03.976 Write (01h): Supported LBA-Change 00:20:03.976 Read (02h): Supported 00:20:03.976 Compare (05h): Supported 00:20:03.976 Write Zeroes (08h): Supported LBA-Change 00:20:03.976 Dataset Management (09h): Supported LBA-Change 00:20:03.976 Copy (19h): Supported LBA-Change 00:20:03.976 Unknown (79h): Supported LBA-Change 00:20:03.976 Unknown (7Ah): Supported 00:20:03.976 00:20:03.976 Error Log 00:20:03.976 ========= 00:20:03.976 00:20:03.976 Arbitration 00:20:03.976 =========== 00:20:03.976 Arbitration Burst: 1 00:20:03.976 00:20:03.976 Power Management 00:20:03.976 ================ 00:20:03.976 Number of Power States: 1 00:20:03.976 Current Power State: Power State #0 00:20:03.976 Power State #0: 00:20:03.976 Max Power: 0.00 W 00:20:03.976 Non-Operational State: Operational 00:20:03.976 Entry Latency: Not Reported 00:20:03.976 Exit Latency: Not Reported 00:20:03.976 Relative Read Throughput: 0 00:20:03.976 Relative Read Latency: 0 00:20:03.976 
Relative Write Throughput: 0 00:20:03.976 Relative Write Latency: 0 00:20:03.976 Idle Power: Not Reported 00:20:03.976 Active Power: Not Reported 00:20:03.976 Non-Operational Permissive Mode: Not Supported 00:20:03.976 00:20:03.976 Health Information 00:20:03.976 ================== 00:20:03.976 Critical Warnings: 00:20:03.976 Available Spare Space: OK 00:20:03.976 Temperature: OK 00:20:03.976 Device Reliability: OK 00:20:03.976 Read Only: No 00:20:03.976 Volatile Memory Backup: OK 00:20:03.976 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:03.976 Temperature Threshold: [2024-11-18 15:06:27.512249] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x818240) on tqpair=0x7cb510 00:20:03.976 [2024-11-18 15:06:27.512389] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.976 [2024-11-18 15:06:27.512399] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.976 [2024-11-18 15:06:27.512403] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7cb510) 00:20:03.976 [2024-11-18 15:06:27.512411] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.976 [2024-11-18 15:06:27.512437] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x818240, cid 7, qid 0 00:20:03.976 [2024-11-18 15:06:27.512508] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.976 [2024-11-18 15:06:27.512514] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.976 [2024-11-18 15:06:27.512519] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.976 [2024-11-18 15:06:27.512524] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x818240) on tqpair=0x7cb510 00:20:03.976 [2024-11-18 15:06:27.512570] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:03.976 [2024-11-18 15:06:27.512586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.976 [2024-11-18 15:06:27.512593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.976 [2024-11-18 15:06:27.512599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.976 [2024-11-18 15:06:27.512605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.976 [2024-11-18 15:06:27.512614] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.976 [2024-11-18 15:06:27.512618] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.976 [2024-11-18 15:06:27.512622] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7cb510) 00:20:03.976 [2024-11-18 15:06:27.512629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.976 [2024-11-18 15:06:27.512653] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x817cc0, cid 3, qid 0 00:20:03.976 [2024-11-18 15:06:27.512718] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.976 [2024-11-18 15:06:27.512730] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.976 
[2024-11-18 15:06:27.512736] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.976 [2024-11-18 15:06:27.512742] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x817cc0) on tqpair=0x7cb510 00:20:03.976 [2024-11-18 15:06:27.512754] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.976 [2024-11-18 15:06:27.512760] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.976 [2024-11-18 15:06:27.512766] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7cb510) 00:20:03.976 [2024-11-18 15:06:27.512776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.976 [2024-11-18 15:06:27.512811] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x817cc0, cid 3, qid 0 00:20:03.976 [2024-11-18 15:06:27.512877] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.976 [2024-11-18 15:06:27.512885] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.976 [2024-11-18 15:06:27.512889] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.976 [2024-11-18 15:06:27.512892] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x817cc0) on tqpair=0x7cb510 00:20:03.976 [2024-11-18 15:06:27.512897] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:03.976 [2024-11-18 15:06:27.512902] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:03.976 [2024-11-18 15:06:27.512912] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.976 [2024-11-18 15:06:27.512918] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.976 [2024-11-18 15:06:27.512924] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7cb510) 00:20:03.976 [2024-11-18 15:06:27.512933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.976 [2024-11-18 15:06:27.512990] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x817cc0, cid 3, qid 0 00:20:03.976 [2024-11-18 15:06:27.513043] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.976 [2024-11-18 15:06:27.513050] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.976 [2024-11-18 15:06:27.513053] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.976 [2024-11-18 15:06:27.513057] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x817cc0) on tqpair=0x7cb510 00:20:03.976 [2024-11-18 15:06:27.513068] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.976 [2024-11-18 15:06:27.513073] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.976 [2024-11-18 15:06:27.513077] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7cb510) 00:20:03.976 [2024-11-18 15:06:27.513084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.976 [2024-11-18 15:06:27.513110] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x817cc0, cid 3, qid 0 00:20:03.976 [2024-11-18 15:06:27.513165] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.976 [2024-11-18 
15:06:27.513176] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.976 [2024-11-18 15:06:27.513180] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.976 [2024-11-18 15:06:27.513184] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x817cc0) on tqpair=0x7cb510 00:20:03.976 [2024-11-18 15:06:27.513195] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.976 [2024-11-18 15:06:27.513200] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.976 [2024-11-18 15:06:27.513203] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7cb510) 00:20:03.976 [2024-11-18 15:06:27.513210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.976 [2024-11-18 15:06:27.513233] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x817cc0, cid 3, qid 0 00:20:03.976 [2024-11-18 15:06:27.513292] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.976 [2024-11-18 15:06:27.513301] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.976 [2024-11-18 15:06:27.513305] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.976 [2024-11-18 15:06:27.513308] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x817cc0) on tqpair=0x7cb510 00:20:03.976 [2024-11-18 15:06:27.513319] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:03.976 [2024-11-18 15:06:27.513335] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:03.976 [2024-11-18 15:06:27.517392] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7cb510) 00:20:03.976 [2024-11-18 15:06:27.517419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.976 [2024-11-18 15:06:27.517448] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x817cc0, cid 3, qid 0 00:20:03.976 [2024-11-18 15:06:27.517510] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:03.976 [2024-11-18 15:06:27.517517] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:03.976 [2024-11-18 15:06:27.517521] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:03.977 [2024-11-18 15:06:27.517524] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x817cc0) on tqpair=0x7cb510 00:20:03.977 [2024-11-18 15:06:27.517533] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:20:03.977 0 Kelvin (-273 Celsius) 00:20:03.977 Available Spare: 0% 00:20:03.977 Available Spare Threshold: 0% 00:20:03.977 Life Percentage Used: 0% 00:20:03.977 Data Units Read: 0 00:20:03.977 Data Units Written: 0 00:20:03.977 Host Read Commands: 0 00:20:03.977 Host Write Commands: 0 00:20:03.977 Controller Busy Time: 0 minutes 00:20:03.977 Power Cycles: 0 00:20:03.977 Power On Hours: 0 hours 00:20:03.977 Unsafe Shutdowns: 0 00:20:03.977 Unrecoverable Media Errors: 0 00:20:03.977 Lifetime Error Log Entries: 0 00:20:03.977 Warning Temperature Time: 0 minutes 00:20:03.977 Critical Temperature Time: 0 minutes 00:20:03.977 00:20:03.977 Number of Queues 00:20:03.977 ================ 00:20:03.977 Number of I/O Submission Queues: 127 00:20:03.977 Number of I/O Completion Queues: 127 00:20:03.977 00:20:03.977 
Active Namespaces 00:20:03.977 ================= 00:20:03.977 Namespace ID:1 00:20:03.977 Error Recovery Timeout: Unlimited 00:20:03.977 Command Set Identifier: NVM (00h) 00:20:03.977 Deallocate: Supported 00:20:03.977 Deallocated/Unwritten Error: Not Supported 00:20:03.977 Deallocated Read Value: Unknown 00:20:03.977 Deallocate in Write Zeroes: Not Supported 00:20:03.977 Deallocated Guard Field: 0xFFFF 00:20:03.977 Flush: Supported 00:20:03.977 Reservation: Supported 00:20:03.977 Namespace Sharing Capabilities: Multiple Controllers 00:20:03.977 Size (in LBAs): 131072 (0GiB) 00:20:03.977 Capacity (in LBAs): 131072 (0GiB) 00:20:03.977 Utilization (in LBAs): 131072 (0GiB) 00:20:03.977 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:03.977 EUI64: ABCDEF0123456789 00:20:03.977 UUID: 204088fe-6b16-40ea-8d0b-847829decb34 00:20:03.977 Thin Provisioning: Not Supported 00:20:03.977 Per-NS Atomic Units: Yes 00:20:03.977 Atomic Boundary Size (Normal): 0 00:20:03.977 Atomic Boundary Size (PFail): 0 00:20:03.977 Atomic Boundary Offset: 0 00:20:03.977 Maximum Single Source Range Length: 65535 00:20:03.977 Maximum Copy Length: 65535 00:20:03.977 Maximum Source Range Count: 1 00:20:03.977 NGUID/EUI64 Never Reused: No 00:20:03.977 Namespace Write Protected: No 00:20:03.977 Number of LBA Formats: 1 00:20:03.977 Current LBA Format: LBA Format #00 00:20:03.977 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:03.977 00:20:03.977 15:06:27 -- host/identify.sh@51 -- # sync 00:20:04.236 15:06:27 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:04.236 15:06:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.236 15:06:27 -- common/autotest_common.sh@10 -- # set +x 00:20:04.236 15:06:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.236 15:06:27 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:04.236 15:06:27 -- host/identify.sh@56 -- # nvmftestfini 00:20:04.236 15:06:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:04.236 15:06:27 -- nvmf/common.sh@116 -- # sync 00:20:04.236 15:06:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:04.236 15:06:27 -- nvmf/common.sh@119 -- # set +e 00:20:04.236 15:06:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:04.237 15:06:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:04.237 rmmod nvme_tcp 00:20:04.237 rmmod nvme_fabrics 00:20:04.237 rmmod nvme_keyring 00:20:04.237 15:06:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:04.237 15:06:27 -- nvmf/common.sh@123 -- # set -e 00:20:04.237 15:06:27 -- nvmf/common.sh@124 -- # return 0 00:20:04.237 15:06:27 -- nvmf/common.sh@477 -- # '[' -n 93626 ']' 00:20:04.237 15:06:27 -- nvmf/common.sh@478 -- # killprocess 93626 00:20:04.237 15:06:27 -- common/autotest_common.sh@936 -- # '[' -z 93626 ']' 00:20:04.237 15:06:27 -- common/autotest_common.sh@940 -- # kill -0 93626 00:20:04.237 15:06:27 -- common/autotest_common.sh@941 -- # uname 00:20:04.237 15:06:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:04.237 15:06:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93626 00:20:04.237 15:06:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:04.237 15:06:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:04.237 killing process with pid 93626 00:20:04.237 15:06:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93626' 00:20:04.237 15:06:27 -- common/autotest_common.sh@955 -- # kill 93626 00:20:04.237 [2024-11-18 15:06:27.679342] app.c: 
883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:04.237 15:06:27 -- common/autotest_common.sh@960 -- # wait 93626 00:20:04.496 15:06:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:04.496 15:06:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:04.496 15:06:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:04.496 15:06:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:04.496 15:06:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:04.496 15:06:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.496 15:06:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.496 15:06:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.496 15:06:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:04.496 00:20:04.496 real 0m2.675s 00:20:04.496 user 0m7.672s 00:20:04.496 sys 0m0.707s 00:20:04.496 15:06:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:04.496 15:06:27 -- common/autotest_common.sh@10 -- # set +x 00:20:04.496 ************************************ 00:20:04.496 END TEST nvmf_identify 00:20:04.496 ************************************ 00:20:04.496 15:06:27 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:04.496 15:06:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:04.496 15:06:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:04.496 15:06:27 -- common/autotest_common.sh@10 -- # set +x 00:20:04.496 ************************************ 00:20:04.496 START TEST nvmf_perf 00:20:04.496 ************************************ 00:20:04.496 15:06:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:04.496 * Looking for test storage... 00:20:04.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:04.496 15:06:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:04.496 15:06:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:04.496 15:06:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:04.755 15:06:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:04.755 15:06:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:04.755 15:06:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:04.755 15:06:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:04.755 15:06:28 -- scripts/common.sh@335 -- # IFS=.-: 00:20:04.755 15:06:28 -- scripts/common.sh@335 -- # read -ra ver1 00:20:04.755 15:06:28 -- scripts/common.sh@336 -- # IFS=.-: 00:20:04.755 15:06:28 -- scripts/common.sh@336 -- # read -ra ver2 00:20:04.755 15:06:28 -- scripts/common.sh@337 -- # local 'op=<' 00:20:04.755 15:06:28 -- scripts/common.sh@339 -- # ver1_l=2 00:20:04.755 15:06:28 -- scripts/common.sh@340 -- # ver2_l=1 00:20:04.755 15:06:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:04.755 15:06:28 -- scripts/common.sh@343 -- # case "$op" in 00:20:04.755 15:06:28 -- scripts/common.sh@344 -- # : 1 00:20:04.755 15:06:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:04.755 15:06:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:04.755 15:06:28 -- scripts/common.sh@364 -- # decimal 1 00:20:04.755 15:06:28 -- scripts/common.sh@352 -- # local d=1 00:20:04.755 15:06:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:04.755 15:06:28 -- scripts/common.sh@354 -- # echo 1 00:20:04.755 15:06:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:04.755 15:06:28 -- scripts/common.sh@365 -- # decimal 2 00:20:04.755 15:06:28 -- scripts/common.sh@352 -- # local d=2 00:20:04.755 15:06:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:04.755 15:06:28 -- scripts/common.sh@354 -- # echo 2 00:20:04.755 15:06:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:04.755 15:06:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:04.755 15:06:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:04.755 15:06:28 -- scripts/common.sh@367 -- # return 0 00:20:04.755 15:06:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:04.755 15:06:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:04.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.755 --rc genhtml_branch_coverage=1 00:20:04.755 --rc genhtml_function_coverage=1 00:20:04.755 --rc genhtml_legend=1 00:20:04.755 --rc geninfo_all_blocks=1 00:20:04.755 --rc geninfo_unexecuted_blocks=1 00:20:04.755 00:20:04.755 ' 00:20:04.755 15:06:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:04.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.756 --rc genhtml_branch_coverage=1 00:20:04.756 --rc genhtml_function_coverage=1 00:20:04.756 --rc genhtml_legend=1 00:20:04.756 --rc geninfo_all_blocks=1 00:20:04.756 --rc geninfo_unexecuted_blocks=1 00:20:04.756 00:20:04.756 ' 00:20:04.756 15:06:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:04.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.756 --rc genhtml_branch_coverage=1 00:20:04.756 --rc genhtml_function_coverage=1 00:20:04.756 --rc genhtml_legend=1 00:20:04.756 --rc geninfo_all_blocks=1 00:20:04.756 --rc geninfo_unexecuted_blocks=1 00:20:04.756 00:20:04.756 ' 00:20:04.756 15:06:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:04.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.756 --rc genhtml_branch_coverage=1 00:20:04.756 --rc genhtml_function_coverage=1 00:20:04.756 --rc genhtml_legend=1 00:20:04.756 --rc geninfo_all_blocks=1 00:20:04.756 --rc geninfo_unexecuted_blocks=1 00:20:04.756 00:20:04.756 ' 00:20:04.756 15:06:28 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:04.756 15:06:28 -- nvmf/common.sh@7 -- # uname -s 00:20:04.756 15:06:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.756 15:06:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.756 15:06:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.756 15:06:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.756 15:06:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.756 15:06:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.756 15:06:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.756 15:06:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.756 15:06:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.756 15:06:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.756 15:06:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:20:04.756 
15:06:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:20:04.756 15:06:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.756 15:06:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.756 15:06:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:04.756 15:06:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:04.756 15:06:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.756 15:06:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.756 15:06:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.756 15:06:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.756 15:06:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.756 15:06:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.756 15:06:28 -- paths/export.sh@5 -- # export PATH 00:20:04.756 15:06:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.756 15:06:28 -- nvmf/common.sh@46 -- # : 0 00:20:04.756 15:06:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:04.756 15:06:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:04.756 15:06:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:04.756 15:06:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.756 15:06:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.756 15:06:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
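A note on the topology the trace below constructs: nvmftestinit runs with NET_TYPE=virt, so nvmf_veth_init builds a disposable veth/bridge/network-namespace sandbox instead of touching physical NICs. Reduced to its essentials it amounts to the following sketch (interface names and addresses as in this run; the second target interface on 10.0.0.3 and the iptables ACCEPT rules are set up the same way and omitted here for brevity):

  # target lives in its own namespace, reachable through a veth pair per side
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # address the initiator side (root namespace) and the target side
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  # bridge the two halves together and bring everything up
  ip link add nvmf_br type bridge
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ping -c 1 10.0.0.2   # root (initiator) namespace -> target namespace sanity check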
00:20:04.756 15:06:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:04.756 15:06:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:04.756 15:06:28 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:04.756 15:06:28 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:04.756 15:06:28 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:04.756 15:06:28 -- host/perf.sh@17 -- # nvmftestinit 00:20:04.756 15:06:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:04.756 15:06:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.756 15:06:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:04.756 15:06:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:04.756 15:06:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:04.756 15:06:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.756 15:06:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.756 15:06:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.756 15:06:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:04.756 15:06:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:04.756 15:06:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:04.756 15:06:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:04.756 15:06:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:04.756 15:06:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:04.756 15:06:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.756 15:06:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:04.756 15:06:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:04.756 15:06:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:04.756 15:06:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:04.756 15:06:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:04.756 15:06:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:04.756 15:06:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.756 15:06:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:04.756 15:06:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:04.756 15:06:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:04.756 15:06:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:04.756 15:06:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:04.756 15:06:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:04.756 Cannot find device "nvmf_tgt_br" 00:20:04.756 15:06:28 -- nvmf/common.sh@154 -- # true 00:20:04.756 15:06:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:04.756 Cannot find device "nvmf_tgt_br2" 00:20:04.756 15:06:28 -- nvmf/common.sh@155 -- # true 00:20:04.756 15:06:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:04.756 15:06:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:04.756 Cannot find device "nvmf_tgt_br" 00:20:04.756 15:06:28 -- nvmf/common.sh@157 -- # true 00:20:04.756 15:06:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:04.756 Cannot find device "nvmf_tgt_br2" 00:20:04.756 15:06:28 -- nvmf/common.sh@158 -- # true 00:20:04.756 15:06:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:04.756 15:06:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:04.756 15:06:28 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:04.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.756 15:06:28 -- nvmf/common.sh@161 -- # true 00:20:04.756 15:06:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:04.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.756 15:06:28 -- nvmf/common.sh@162 -- # true 00:20:04.756 15:06:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:04.756 15:06:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:04.756 15:06:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:04.756 15:06:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:04.756 15:06:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:05.015 15:06:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:05.015 15:06:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:05.015 15:06:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:05.015 15:06:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:05.015 15:06:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:05.015 15:06:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:05.015 15:06:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:05.015 15:06:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:05.015 15:06:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:05.015 15:06:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:05.015 15:06:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:05.015 15:06:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:05.015 15:06:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:05.015 15:06:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:05.015 15:06:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:05.015 15:06:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:05.015 15:06:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:05.015 15:06:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:05.015 15:06:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:05.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:05.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:20:05.015 00:20:05.015 --- 10.0.0.2 ping statistics --- 00:20:05.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.015 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:20:05.015 15:06:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:05.015 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:05.015 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:20:05.015 00:20:05.016 --- 10.0.0.3 ping statistics --- 00:20:05.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.016 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:20:05.016 15:06:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:05.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:05.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:20:05.016 00:20:05.016 --- 10.0.0.1 ping statistics --- 00:20:05.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.016 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:20:05.016 15:06:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:05.016 15:06:28 -- nvmf/common.sh@421 -- # return 0 00:20:05.016 15:06:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:05.016 15:06:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:05.016 15:06:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:05.016 15:06:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:05.016 15:06:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:05.016 15:06:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:05.016 15:06:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:05.016 15:06:28 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:05.016 15:06:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:05.016 15:06:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:05.016 15:06:28 -- common/autotest_common.sh@10 -- # set +x 00:20:05.016 15:06:28 -- nvmf/common.sh@469 -- # nvmfpid=93855 00:20:05.016 15:06:28 -- nvmf/common.sh@470 -- # waitforlisten 93855 00:20:05.016 15:06:28 -- common/autotest_common.sh@829 -- # '[' -z 93855 ']' 00:20:05.016 15:06:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.016 15:06:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:05.016 15:06:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:05.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.016 15:06:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.016 15:06:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:05.016 15:06:28 -- common/autotest_common.sh@10 -- # set +x 00:20:05.016 [2024-11-18 15:06:28.569085] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:05.016 [2024-11-18 15:06:28.569264] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.275 [2024-11-18 15:06:28.708759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:05.275 [2024-11-18 15:06:28.774274] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:05.275 [2024-11-18 15:06:28.774472] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.275 [2024-11-18 15:06:28.774485] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
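With the target application up and listening on /var/tmp/spdk.sock, the rest of the setup traced below is plain JSON-RPC plus the perf binary. Condensed into a sketch (rpc.py is scripts/rpc.py in the SPDK tree; Nvme0n1 is the local NVMe drive at 0000:00:06.0 attached via gen_nvme.sh, Malloc0 a 64 MiB ramdisk with 512-byte blocks):

  rpc.py bdev_malloc_create 64 512
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # first fabric-side run of the sweep; later runs vary -q, -o, -O, -t and core mask
  spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'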
00:20:05.275 [2024-11-18 15:06:28.774493] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:05.275 [2024-11-18 15:06:28.774658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.275 [2024-11-18 15:06:28.774772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.275 [2024-11-18 15:06:28.775198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:05.275 [2024-11-18 15:06:28.775244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.211 15:06:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:06.211 15:06:29 -- common/autotest_common.sh@862 -- # return 0 00:20:06.211 15:06:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:06.211 15:06:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:06.211 15:06:29 -- common/autotest_common.sh@10 -- # set +x 00:20:06.211 15:06:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.211 15:06:29 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:06.211 15:06:29 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:06.470 15:06:30 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:06.470 15:06:30 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:07.114 15:06:30 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:20:07.114 15:06:30 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:07.114 15:06:30 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:07.114 15:06:30 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:20:07.114 15:06:30 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:07.114 15:06:30 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:07.114 15:06:30 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:07.373 [2024-11-18 15:06:30.756234] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.373 15:06:30 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:07.631 15:06:30 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:07.631 15:06:30 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:07.631 15:06:31 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:07.631 15:06:31 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:07.890 15:06:31 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:08.150 [2024-11-18 15:06:31.581886] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.150 15:06:31 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:08.409 15:06:31 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:08.409 15:06:31 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:08.409 15:06:31 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:08.409 15:06:31 -- host/perf.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:09.345 Initializing NVMe Controllers 00:20:09.345 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:20:09.345 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:20:09.345 Initialization complete. Launching workers. 00:20:09.345 ======================================================== 00:20:09.345 Latency(us) 00:20:09.345 Device Information : IOPS MiB/s Average min max 00:20:09.345 PCIE (0000:00:06.0) NSID 1 from core 0: 20631.19 80.59 1550.51 381.05 10263.78 00:20:09.345 ======================================================== 00:20:09.345 Total : 20631.19 80.59 1550.51 381.05 10263.78 00:20:09.345 00:20:09.345 15:06:32 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:10.722 Initializing NVMe Controllers 00:20:10.722 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:10.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:10.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:10.722 Initialization complete. Launching workers. 00:20:10.722 ======================================================== 00:20:10.722 Latency(us) 00:20:10.722 Device Information : IOPS MiB/s Average min max 00:20:10.722 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3446.00 13.46 289.95 102.62 4251.77 00:20:10.722 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8055.62 6007.34 12040.32 00:20:10.722 ======================================================== 00:20:10.722 Total : 3571.00 13.95 561.78 102.62 12040.32 00:20:10.722 00:20:10.722 15:06:34 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:12.099 [2024-11-18 15:06:35.546344] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b96a0 is same with the state(5) to be set 00:20:12.099 [2024-11-18 15:06:35.546404] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b96a0 is same with the state(5) to be set 00:20:12.099 [2024-11-18 15:06:35.546438] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b96a0 is same with the state(5) to be set 00:20:12.099 [2024-11-18 15:06:35.546450] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b96a0 is same with the state(5) to be set 00:20:12.099 Initializing NVMe Controllers 00:20:12.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:12.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:12.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:12.099 Initialization complete. Launching workers. 
00:20:12.099 ======================================================== 00:20:12.099 Latency(us) 00:20:12.099 Device Information : IOPS MiB/s Average min max 00:20:12.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9948.57 38.86 3217.58 657.29 10470.62 00:20:12.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2680.45 10.47 12039.73 5878.81 21927.83 00:20:12.099 ======================================================== 00:20:12.099 Total : 12629.03 49.33 5090.04 657.29 21927.83 00:20:12.099 00:20:12.099 15:06:35 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:12.100 15:06:35 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:14.634 Initializing NVMe Controllers 00:20:14.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:14.634 Controller IO queue size 128, less than required. 00:20:14.634 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:14.634 Controller IO queue size 128, less than required. 00:20:14.634 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:14.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:14.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:14.634 Initialization complete. Launching workers. 00:20:14.634 ======================================================== 00:20:14.634 Latency(us) 00:20:14.634 Device Information : IOPS MiB/s Average min max 00:20:14.634 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1652.03 413.01 78684.30 48619.24 151997.25 00:20:14.634 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 499.90 124.98 264460.01 109903.57 547460.81 00:20:14.634 ======================================================== 00:20:14.634 Total : 2151.93 537.98 121840.83 48619.24 547460.81 00:20:14.634 00:20:14.634 15:06:38 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:14.893 No valid NVMe controllers or AIO or URING devices found 00:20:14.893 Initializing NVMe Controllers 00:20:14.893 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:14.893 Controller IO queue size 128, less than required. 00:20:14.893 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:14.893 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:14.893 Controller IO queue size 128, less than required. 00:20:14.893 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:14.893 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:20:14.893 WARNING: Some requested NVMe devices were skipped 00:20:14.893 15:06:38 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:17.426 Initializing NVMe Controllers 00:20:17.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:17.426 Controller IO queue size 128, less than required. 00:20:17.426 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:17.426 Controller IO queue size 128, less than required. 00:20:17.426 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:17.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:17.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:17.426 Initialization complete. Launching workers. 00:20:17.426 00:20:17.426 ==================== 00:20:17.426 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:17.426 TCP transport: 00:20:17.426 polls: 8551 00:20:17.426 idle_polls: 5664 00:20:17.426 sock_completions: 2887 00:20:17.426 nvme_completions: 4163 00:20:17.426 submitted_requests: 6367 00:20:17.426 queued_requests: 1 00:20:17.426 00:20:17.426 ==================== 00:20:17.426 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:17.426 TCP transport: 00:20:17.426 polls: 8730 00:20:17.426 idle_polls: 5847 00:20:17.426 sock_completions: 2883 00:20:17.426 nvme_completions: 5694 00:20:17.426 submitted_requests: 8686 00:20:17.426 queued_requests: 1 00:20:17.426 ======================================================== 00:20:17.426 Latency(us) 00:20:17.426 Device Information : IOPS MiB/s Average min max 00:20:17.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1103.73 275.93 119866.94 78655.30 231506.77 00:20:17.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1484.47 371.12 87079.53 44443.85 136528.87 00:20:17.426 ======================================================== 00:20:17.426 Total : 2588.20 647.05 101061.65 44443.85 231506.77 00:20:17.426 00:20:17.426 15:06:40 -- host/perf.sh@66 -- # sync 00:20:17.426 15:06:40 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:17.993 15:06:41 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:17.993 15:06:41 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:20:17.993 15:06:41 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:17.993 15:06:41 -- host/perf.sh@72 -- # ls_guid=83ab04b4-ab1b-4e36-bbfc-31171e3ae7d2 00:20:17.993 15:06:41 -- host/perf.sh@73 -- # get_lvs_free_mb 83ab04b4-ab1b-4e36-bbfc-31171e3ae7d2 00:20:17.993 15:06:41 -- common/autotest_common.sh@1353 -- # local lvs_uuid=83ab04b4-ab1b-4e36-bbfc-31171e3ae7d2 00:20:17.993 15:06:41 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:17.993 15:06:41 -- common/autotest_common.sh@1355 -- # local fc 00:20:17.993 15:06:41 -- common/autotest_common.sh@1356 -- # local cs 00:20:17.993 15:06:41 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:18.561 15:06:41 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:18.561 
{ 00:20:18.561 "base_bdev": "Nvme0n1", 00:20:18.561 "block_size": 4096, 00:20:18.561 "cluster_size": 4194304, 00:20:18.561 "free_clusters": 1278, 00:20:18.561 "name": "lvs_0", 00:20:18.561 "total_data_clusters": 1278, 00:20:18.561 "uuid": "83ab04b4-ab1b-4e36-bbfc-31171e3ae7d2" 00:20:18.561 } 00:20:18.561 ]' 00:20:18.561 15:06:41 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="83ab04b4-ab1b-4e36-bbfc-31171e3ae7d2") .free_clusters' 00:20:18.561 15:06:41 -- common/autotest_common.sh@1358 -- # fc=1278 00:20:18.561 15:06:41 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="83ab04b4-ab1b-4e36-bbfc-31171e3ae7d2") .cluster_size' 00:20:18.561 5112 00:20:18.561 15:06:41 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:18.561 15:06:41 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:20:18.561 15:06:41 -- common/autotest_common.sh@1363 -- # echo 5112 00:20:18.561 15:06:41 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:18.561 15:06:41 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 83ab04b4-ab1b-4e36-bbfc-31171e3ae7d2 lbd_0 5112 00:20:18.561 15:06:42 -- host/perf.sh@80 -- # lb_guid=4775be73-2cf4-4830-811b-b46433fcc5b8 00:20:18.561 15:06:42 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 4775be73-2cf4-4830-811b-b46433fcc5b8 lvs_n_0 00:20:19.128 15:06:42 -- host/perf.sh@83 -- # ls_nested_guid=1c292827-ef93-42c9-8579-1061d67e53b9 00:20:19.128 15:06:42 -- host/perf.sh@84 -- # get_lvs_free_mb 1c292827-ef93-42c9-8579-1061d67e53b9 00:20:19.128 15:06:42 -- common/autotest_common.sh@1353 -- # local lvs_uuid=1c292827-ef93-42c9-8579-1061d67e53b9 00:20:19.128 15:06:42 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:19.128 15:06:42 -- common/autotest_common.sh@1355 -- # local fc 00:20:19.128 15:06:42 -- common/autotest_common.sh@1356 -- # local cs 00:20:19.128 15:06:42 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:19.387 15:06:42 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:19.387 { 00:20:19.387 "base_bdev": "Nvme0n1", 00:20:19.387 "block_size": 4096, 00:20:19.387 "cluster_size": 4194304, 00:20:19.387 "free_clusters": 0, 00:20:19.387 "name": "lvs_0", 00:20:19.387 "total_data_clusters": 1278, 00:20:19.387 "uuid": "83ab04b4-ab1b-4e36-bbfc-31171e3ae7d2" 00:20:19.387 }, 00:20:19.387 { 00:20:19.387 "base_bdev": "4775be73-2cf4-4830-811b-b46433fcc5b8", 00:20:19.387 "block_size": 4096, 00:20:19.387 "cluster_size": 4194304, 00:20:19.387 "free_clusters": 1276, 00:20:19.387 "name": "lvs_n_0", 00:20:19.387 "total_data_clusters": 1276, 00:20:19.387 "uuid": "1c292827-ef93-42c9-8579-1061d67e53b9" 00:20:19.387 } 00:20:19.387 ]' 00:20:19.387 15:06:42 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="1c292827-ef93-42c9-8579-1061d67e53b9") .free_clusters' 00:20:19.387 15:06:42 -- common/autotest_common.sh@1358 -- # fc=1276 00:20:19.387 15:06:42 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="1c292827-ef93-42c9-8579-1061d67e53b9") .cluster_size' 00:20:19.387 5104 00:20:19.387 15:06:42 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:19.387 15:06:42 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:20:19.387 15:06:42 -- common/autotest_common.sh@1363 -- # echo 5104 00:20:19.387 15:06:42 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:19.387 15:06:42 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
1c292827-ef93-42c9-8579-1061d67e53b9 lbd_nest_0 5104 00:20:19.646 15:06:43 -- host/perf.sh@88 -- # lb_nested_guid=9387cea7-c371-47e2-97ca-539acbcf9fdb 00:20:19.646 15:06:43 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:19.905 15:06:43 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:19.905 15:06:43 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 9387cea7-c371-47e2-97ca-539acbcf9fdb 00:20:20.163 15:06:43 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:20.163 15:06:43 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:20.163 15:06:43 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:20.163 15:06:43 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:20.163 15:06:43 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:20.163 15:06:43 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:20.422 No valid NVMe controllers or AIO or URING devices found 00:20:20.681 Initializing NVMe Controllers 00:20:20.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:20.681 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:20.681 WARNING: Some requested NVMe devices were skipped 00:20:20.681 15:06:44 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:20.681 15:06:44 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:32.894 Initializing NVMe Controllers 00:20:32.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:32.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:32.894 Initialization complete. Launching workers. 
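The size arguments fed to bdev_lvol_create above (5112 and 5104) are derived from the lvstore dump: free megabytes are free_clusters x cluster_size, i.e. 1278 x 4194304 B = 5112 MiB on lvs_0 and 1276 x 4 MiB = 5104 MiB inside the nested lvs_n_0. A condensed bash sketch of that computation and of the lvol layering driven here (rpc.py path as in this log; this is a sketch, not the test script itself):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# get_lvs_free_mb equivalent: free space of an lvstore, in MiB
lvs_free_mb() {
  local uuid=$1 fc cs
  fc=$("$rpc" bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .free_clusters")
  cs=$("$rpc" bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .cluster_size")
  echo $((fc * cs / 1024 / 1024))    # e.g. 1278 * 4194304 / 2^20 = 5112
}

# lvstore on the NVMe bdev -> lvol filling it -> nested lvstore -> nested lvol
ls_guid=$("$rpc" bdev_lvol_create_lvstore Nvme0n1 lvs_0)
lb_guid=$("$rpc" bdev_lvol_create -u "$ls_guid" lbd_0 "$(lvs_free_mb "$ls_guid")")
ls_nested_guid=$("$rpc" bdev_lvol_create_lvstore "$lb_guid" lvs_n_0)
lb_nested_guid=$("$rpc" bdev_lvol_create -u "$ls_nested_guid" lbd_nest_0 "$(lvs_free_mb "$ls_nested_guid")")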
00:20:32.894 ======================================================== 00:20:32.894 Latency(us) 00:20:32.894 Device Information : IOPS MiB/s Average min max 00:20:32.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 856.00 107.00 1167.39 391.20 8418.94 00:20:32.894 ======================================================== 00:20:32.894 Total : 856.00 107.00 1167.39 391.20 8418.94 00:20:32.894 00:20:32.894 15:06:54 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:32.894 15:06:54 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:32.894 15:06:54 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:32.894 No valid NVMe controllers or AIO or URING devices found 00:20:32.894 Initializing NVMe Controllers 00:20:32.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:32.894 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:32.894 WARNING: Some requested NVMe devices were skipped 00:20:32.894 15:06:54 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:32.894 15:06:54 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:42.871 Initializing NVMe Controllers 00:20:42.871 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:42.871 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:42.871 Initialization complete. Launching workers. 00:20:42.871 ======================================================== 00:20:42.871 Latency(us) 00:20:42.871 Device Information : IOPS MiB/s Average min max 00:20:42.871 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1127.80 140.97 28407.49 7130.87 235994.25 00:20:42.871 ======================================================== 00:20:42.871 Total : 1127.80 140.97 28407.49 7130.87 235994.25 00:20:42.871 00:20:42.871 15:07:04 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:42.871 15:07:04 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:42.871 15:07:04 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:42.871 No valid NVMe controllers or AIO or URING devices found 00:20:42.871 Initializing NVMe Controllers 00:20:42.871 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:42.871 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:42.871 WARNING: Some requested NVMe devices were skipped 00:20:42.871 15:07:05 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:42.871 15:07:05 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:52.852 Initializing NVMe Controllers 00:20:52.852 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:52.852 Controller IO queue size 128, less than required. 00:20:52.852 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
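These perf passes come from a two-level loop over queue depth and I/O size. The 512-byte passes print "No valid NVMe controllers" because the lvol namespace exposes a 4096-byte block size, so a 512-byte I/O is rejected ("invalid ns size ... for I/O size 512") and only the 128 KiB passes produce latency tables. A condensed sketch of the sweep, using the paths and flags visible in the invocations above:

perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
addr='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
qd_depth=("1" "32" "128")
io_size=("512" "131072")
for qd in "${qd_depth[@]}"; do
  for o in "${io_size[@]}"; do
    # randrw, 50% reads, 10 s per combination, against the TCP listener above
    "$perf" -q "$qd" -o "$o" -w randrw -M 50 -t 10 -r "$addr"
  done
done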
00:20:52.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:52.852 Initialization complete. Launching workers. 00:20:52.852 ======================================================== 00:20:52.852 Latency(us) 00:20:52.852 Device Information : IOPS MiB/s Average min max 00:20:52.852 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3877.36 484.67 33043.37 13487.21 66781.08 00:20:52.852 ======================================================== 00:20:52.852 Total : 3877.36 484.67 33043.37 13487.21 66781.08 00:20:52.852 00:20:52.852 15:07:15 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:52.852 15:07:15 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9387cea7-c371-47e2-97ca-539acbcf9fdb 00:20:52.852 15:07:16 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:20:53.112 15:07:16 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4775be73-2cf4-4830-811b-b46433fcc5b8 00:20:53.112 15:07:16 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:20:53.372 15:07:16 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:53.372 15:07:16 -- host/perf.sh@114 -- # nvmftestfini 00:20:53.372 15:07:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:53.372 15:07:16 -- nvmf/common.sh@116 -- # sync 00:20:53.372 15:07:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:53.372 15:07:16 -- nvmf/common.sh@119 -- # set +e 00:20:53.372 15:07:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:53.372 15:07:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:53.372 rmmod nvme_tcp 00:20:53.631 rmmod nvme_fabrics 00:20:53.631 rmmod nvme_keyring 00:20:53.631 15:07:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:53.631 15:07:17 -- nvmf/common.sh@123 -- # set -e 00:20:53.631 15:07:17 -- nvmf/common.sh@124 -- # return 0 00:20:53.631 15:07:17 -- nvmf/common.sh@477 -- # '[' -n 93855 ']' 00:20:53.631 15:07:17 -- nvmf/common.sh@478 -- # killprocess 93855 00:20:53.631 15:07:17 -- common/autotest_common.sh@936 -- # '[' -z 93855 ']' 00:20:53.631 15:07:17 -- common/autotest_common.sh@940 -- # kill -0 93855 00:20:53.631 15:07:17 -- common/autotest_common.sh@941 -- # uname 00:20:53.631 15:07:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:53.631 15:07:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93855 00:20:53.631 15:07:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:53.631 killing process with pid 93855 00:20:53.631 15:07:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:53.631 15:07:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93855' 00:20:53.631 15:07:17 -- common/autotest_common.sh@955 -- # kill 93855 00:20:53.631 15:07:17 -- common/autotest_common.sh@960 -- # wait 93855 00:20:55.014 15:07:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:55.014 15:07:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:55.014 15:07:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:55.014 15:07:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:55.014 15:07:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:55.014 15:07:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.014 15:07:18 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:55.014 15:07:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.014 15:07:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:55.014 00:20:55.014 real 0m50.456s 00:20:55.014 user 3m9.595s 00:20:55.014 sys 0m10.314s 00:20:55.014 15:07:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:55.014 ************************************ 00:20:55.014 END TEST nvmf_perf 00:20:55.014 ************************************ 00:20:55.014 15:07:18 -- common/autotest_common.sh@10 -- # set +x 00:20:55.014 15:07:18 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:55.014 15:07:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:55.014 15:07:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:55.014 15:07:18 -- common/autotest_common.sh@10 -- # set +x 00:20:55.014 ************************************ 00:20:55.014 START TEST nvmf_fio_host 00:20:55.014 ************************************ 00:20:55.014 15:07:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:55.014 * Looking for test storage... 00:20:55.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:55.014 15:07:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:55.014 15:07:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:55.014 15:07:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:55.276 15:07:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:55.276 15:07:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:55.276 15:07:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:55.276 15:07:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:55.276 15:07:18 -- scripts/common.sh@335 -- # IFS=.-: 00:20:55.276 15:07:18 -- scripts/common.sh@335 -- # read -ra ver1 00:20:55.276 15:07:18 -- scripts/common.sh@336 -- # IFS=.-: 00:20:55.276 15:07:18 -- scripts/common.sh@336 -- # read -ra ver2 00:20:55.276 15:07:18 -- scripts/common.sh@337 -- # local 'op=<' 00:20:55.276 15:07:18 -- scripts/common.sh@339 -- # ver1_l=2 00:20:55.276 15:07:18 -- scripts/common.sh@340 -- # ver2_l=1 00:20:55.276 15:07:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:55.276 15:07:18 -- scripts/common.sh@343 -- # case "$op" in 00:20:55.276 15:07:18 -- scripts/common.sh@344 -- # : 1 00:20:55.276 15:07:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:55.276 15:07:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:55.276 15:07:18 -- scripts/common.sh@364 -- # decimal 1 00:20:55.276 15:07:18 -- scripts/common.sh@352 -- # local d=1 00:20:55.276 15:07:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:55.276 15:07:18 -- scripts/common.sh@354 -- # echo 1 00:20:55.276 15:07:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:55.276 15:07:18 -- scripts/common.sh@365 -- # decimal 2 00:20:55.276 15:07:18 -- scripts/common.sh@352 -- # local d=2 00:20:55.276 15:07:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:55.276 15:07:18 -- scripts/common.sh@354 -- # echo 2 00:20:55.276 15:07:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:55.276 15:07:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:55.276 15:07:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:55.276 15:07:18 -- scripts/common.sh@367 -- # return 0 00:20:55.276 15:07:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:55.276 15:07:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:55.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.276 --rc genhtml_branch_coverage=1 00:20:55.276 --rc genhtml_function_coverage=1 00:20:55.276 --rc genhtml_legend=1 00:20:55.276 --rc geninfo_all_blocks=1 00:20:55.276 --rc geninfo_unexecuted_blocks=1 00:20:55.276 00:20:55.276 ' 00:20:55.276 15:07:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:55.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.276 --rc genhtml_branch_coverage=1 00:20:55.276 --rc genhtml_function_coverage=1 00:20:55.276 --rc genhtml_legend=1 00:20:55.276 --rc geninfo_all_blocks=1 00:20:55.276 --rc geninfo_unexecuted_blocks=1 00:20:55.276 00:20:55.276 ' 00:20:55.276 15:07:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:55.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.276 --rc genhtml_branch_coverage=1 00:20:55.276 --rc genhtml_function_coverage=1 00:20:55.276 --rc genhtml_legend=1 00:20:55.276 --rc geninfo_all_blocks=1 00:20:55.276 --rc geninfo_unexecuted_blocks=1 00:20:55.276 00:20:55.276 ' 00:20:55.276 15:07:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:55.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.276 --rc genhtml_branch_coverage=1 00:20:55.276 --rc genhtml_function_coverage=1 00:20:55.276 --rc genhtml_legend=1 00:20:55.276 --rc geninfo_all_blocks=1 00:20:55.276 --rc geninfo_unexecuted_blocks=1 00:20:55.276 00:20:55.276 ' 00:20:55.276 15:07:18 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:55.276 15:07:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.276 15:07:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.276 15:07:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.276 15:07:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.277 15:07:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.277 15:07:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.277 15:07:18 -- paths/export.sh@5 -- # export PATH 00:20:55.277 15:07:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.277 15:07:18 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:55.277 15:07:18 -- nvmf/common.sh@7 -- # uname -s 00:20:55.277 15:07:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:55.277 15:07:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.277 15:07:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:55.277 15:07:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.277 15:07:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:55.277 15:07:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:55.277 15:07:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.277 15:07:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:55.277 15:07:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.277 15:07:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:55.277 15:07:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:20:55.277 15:07:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:20:55.277 15:07:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.277 15:07:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:55.277 15:07:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:55.277 15:07:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:55.277 15:07:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.277 15:07:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.277 15:07:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.277 15:07:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.277 15:07:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.277 15:07:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.277 15:07:18 -- paths/export.sh@5 -- # export PATH 00:20:55.277 15:07:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.277 15:07:18 -- nvmf/common.sh@46 -- # : 0 00:20:55.277 15:07:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:55.277 15:07:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:55.277 15:07:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:55.277 15:07:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:55.277 15:07:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:55.277 15:07:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:55.277 15:07:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:55.277 15:07:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:55.277 15:07:18 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:55.277 15:07:18 -- host/fio.sh@14 -- # nvmftestinit 00:20:55.277 15:07:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:55.277 15:07:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.277 15:07:18 -- nvmf/common.sh@436 -- # prepare_net_devs 
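With NET_TYPE=virt, nvmftestinit drops into nvmf_veth_init, whose individual ip and iptables calls follow in the trace. Condensed, the topology it builds looks like this (namespace and interface names as used by the suite; link-up steps omitted for brevity):

# Target runs inside its own network namespace; the initiator stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator <-> bridge
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target    <-> bridge
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br    # the *_br peers hang off one bridge,
ip link set nvmf_tgt_br  master nvmf_br    # giving the initiator L2 reachability
ip link set nvmf_tgt_br2 master nvmf_br    # to both target addresses
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit the NVMe/TCP port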
00:20:55.277 15:07:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:55.277 15:07:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:55.277 15:07:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.277 15:07:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:55.277 15:07:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.277 15:07:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:55.277 15:07:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:55.277 15:07:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:55.277 15:07:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:55.277 15:07:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:55.277 15:07:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:55.277 15:07:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.277 15:07:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.277 15:07:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:55.277 15:07:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:55.277 15:07:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:55.277 15:07:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:55.277 15:07:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:55.277 15:07:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.277 15:07:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:55.277 15:07:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:55.277 15:07:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:55.277 15:07:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:55.277 15:07:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:55.277 15:07:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:55.277 Cannot find device "nvmf_tgt_br" 00:20:55.277 15:07:18 -- nvmf/common.sh@154 -- # true 00:20:55.277 15:07:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:55.277 Cannot find device "nvmf_tgt_br2" 00:20:55.277 15:07:18 -- nvmf/common.sh@155 -- # true 00:20:55.277 15:07:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:55.277 15:07:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:55.277 Cannot find device "nvmf_tgt_br" 00:20:55.277 15:07:18 -- nvmf/common.sh@157 -- # true 00:20:55.277 15:07:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:55.277 Cannot find device "nvmf_tgt_br2" 00:20:55.277 15:07:18 -- nvmf/common.sh@158 -- # true 00:20:55.277 15:07:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:55.277 15:07:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:55.538 15:07:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:55.538 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:55.538 15:07:18 -- nvmf/common.sh@161 -- # true 00:20:55.538 15:07:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:55.538 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:55.538 15:07:18 -- nvmf/common.sh@162 -- # true 00:20:55.539 15:07:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:55.539 15:07:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:55.539 15:07:18 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:55.539 15:07:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:55.539 15:07:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:55.539 15:07:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:55.539 15:07:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:55.539 15:07:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:55.539 15:07:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:55.539 15:07:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:55.539 15:07:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:55.539 15:07:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:55.539 15:07:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:55.539 15:07:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:55.539 15:07:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:55.539 15:07:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:55.539 15:07:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:55.539 15:07:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:55.539 15:07:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:55.539 15:07:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:55.539 15:07:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:55.539 15:07:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:55.539 15:07:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:55.539 15:07:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:55.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:20:55.539 00:20:55.539 --- 10.0.0.2 ping statistics --- 00:20:55.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.539 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:20:55.539 15:07:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:55.539 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:55.539 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:20:55.539 00:20:55.539 --- 10.0.0.3 ping statistics --- 00:20:55.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.539 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:55.539 15:07:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:55.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:55.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:55.539 00:20:55.539 --- 10.0.0.1 ping statistics --- 00:20:55.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.539 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:55.539 15:07:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.539 15:07:19 -- nvmf/common.sh@421 -- # return 0 00:20:55.539 15:07:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:55.539 15:07:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.539 15:07:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:55.539 15:07:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:55.539 15:07:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.539 15:07:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:55.539 15:07:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:55.539 15:07:19 -- host/fio.sh@16 -- # [[ y != y ]] 00:20:55.539 15:07:19 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:55.539 15:07:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:55.539 15:07:19 -- common/autotest_common.sh@10 -- # set +x 00:20:55.539 15:07:19 -- host/fio.sh@24 -- # nvmfpid=94831 00:20:55.539 15:07:19 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:55.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.539 15:07:19 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:55.539 15:07:19 -- host/fio.sh@28 -- # waitforlisten 94831 00:20:55.539 15:07:19 -- common/autotest_common.sh@829 -- # '[' -z 94831 ']' 00:20:55.539 15:07:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.539 15:07:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:55.539 15:07:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.539 15:07:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:55.539 15:07:19 -- common/autotest_common.sh@10 -- # set +x 00:20:55.803 [2024-11-18 15:07:19.158790] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:55.803 [2024-11-18 15:07:19.158877] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.803 [2024-11-18 15:07:19.302625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:55.803 [2024-11-18 15:07:19.368808] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:55.803 [2024-11-18 15:07:19.369010] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.803 [2024-11-18 15:07:19.369031] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.803 [2024-11-18 15:07:19.369042] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
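After the connectivity checks, the target application is launched inside that namespace and the harness blocks until its RPC socket is serviceable before any configuration RPCs are sent. A minimal stand-in for that start-and-wait step (not the suite's waitforlisten helper; it assumes the default /var/tmp/spdk.sock socket shown above):

# Start the target inside the namespace and poll its RPC socket until it answers.
tgt=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ip netns exec nvmf_tgt_ns_spdk "$tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
  sleep 0.5
done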
00:20:55.803 [2024-11-18 15:07:19.369215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.803 [2024-11-18 15:07:19.369599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.803 [2024-11-18 15:07:19.370218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:55.803 [2024-11-18 15:07:19.370265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.744 15:07:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:56.744 15:07:20 -- common/autotest_common.sh@862 -- # return 0 00:20:56.744 15:07:20 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:57.003 [2024-11-18 15:07:20.367052] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.003 15:07:20 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:57.003 15:07:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:57.003 15:07:20 -- common/autotest_common.sh@10 -- # set +x 00:20:57.003 15:07:20 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:57.263 Malloc1 00:20:57.263 15:07:20 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:57.522 15:07:20 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:57.781 15:07:21 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:58.042 [2024-11-18 15:07:21.390453] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.042 15:07:21 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:58.042 15:07:21 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:58.042 15:07:21 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:58.042 15:07:21 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:58.042 15:07:21 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:58.042 15:07:21 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:58.042 15:07:21 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:58.042 15:07:21 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:58.042 15:07:21 -- common/autotest_common.sh@1330 -- # shift 00:20:58.302 15:07:21 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:58.302 15:07:21 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:58.302 15:07:21 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:58.302 15:07:21 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:58.302 15:07:21 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:58.302 15:07:21 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:58.302 15:07:21 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:58.302 15:07:21 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:58.302 15:07:21 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:58.302 15:07:21 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:58.302 15:07:21 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:58.302 15:07:21 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:58.302 15:07:21 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:58.302 15:07:21 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:58.302 15:07:21 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:58.302 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:58.302 fio-3.35 00:20:58.302 Starting 1 thread 00:21:00.840 00:21:00.840 test: (groupid=0, jobs=1): err= 0: pid=94961: Mon Nov 18 15:07:24 2024 00:21:00.840 read: IOPS=10.7k, BW=41.7MiB/s (43.7MB/s)(83.6MiB/2005msec) 00:21:00.840 slat (nsec): min=1641, max=361372, avg=2287.23, stdev=3480.63 00:21:00.840 clat (usec): min=3442, max=13030, avg=6378.98, stdev=620.34 00:21:00.840 lat (usec): min=3503, max=13032, avg=6381.27, stdev=620.36 00:21:00.840 clat percentiles (usec): 00:21:00.840 | 1.00th=[ 5276], 5.00th=[ 5538], 10.00th=[ 5735], 20.00th=[ 5932], 00:21:00.840 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6325], 60.00th=[ 6456], 00:21:00.840 | 70.00th=[ 6587], 80.00th=[ 6783], 90.00th=[ 7046], 95.00th=[ 7308], 00:21:00.840 | 99.00th=[ 8225], 99.50th=[ 9110], 99.90th=[11600], 99.95th=[13042], 00:21:00.840 | 99.99th=[13042] 00:21:00.840 bw ( KiB/s): min=41656, max=43472, per=99.87%, avg=42618.00, stdev=880.18, samples=4 00:21:00.840 iops : min=10414, max=10868, avg=10654.50, stdev=220.04, samples=4 00:21:00.840 write: IOPS=10.7k, BW=41.6MiB/s (43.6MB/s)(83.4MiB/2005msec); 0 zone resets 00:21:00.840 slat (nsec): min=1729, max=325732, avg=2386.05, stdev=2682.13 00:21:00.840 clat (usec): min=2639, max=10254, avg=5577.58, stdev=482.32 00:21:00.840 lat (usec): min=2653, max=10256, avg=5579.97, stdev=482.36 00:21:00.840 clat percentiles (usec): 00:21:00.841 | 1.00th=[ 4555], 5.00th=[ 4883], 10.00th=[ 5014], 20.00th=[ 5211], 00:21:00.841 | 30.00th=[ 5342], 40.00th=[ 5473], 50.00th=[ 5538], 60.00th=[ 5669], 00:21:00.841 | 70.00th=[ 5800], 80.00th=[ 5932], 90.00th=[ 6128], 95.00th=[ 6325], 00:21:00.841 | 99.00th=[ 6849], 99.50th=[ 7373], 99.90th=[ 9110], 99.95th=[ 9503], 00:21:00.841 | 99.99th=[10159] 00:21:00.841 bw ( KiB/s): min=41712, max=43848, per=100.00%, avg=42624.00, stdev=934.67, samples=4 00:21:00.841 iops : min=10428, max=10962, avg=10656.00, stdev=233.67, samples=4 00:21:00.841 lat (msec) : 4=0.07%, 10=99.76%, 20=0.18% 00:21:00.841 cpu : usr=66.12%, sys=23.60%, ctx=5, majf=0, minf=5 00:21:00.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:00.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:00.841 issued rwts: total=21391,21357,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.841 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:00.841 00:21:00.841 Run status group 0 (all jobs): 00:21:00.841 READ: bw=41.7MiB/s (43.7MB/s), 41.7MiB/s-41.7MiB/s (43.7MB/s-43.7MB/s), io=83.6MiB (87.6MB), 
run=2005-2005msec 00:21:00.841 WRITE: bw=41.6MiB/s (43.6MB/s), 41.6MiB/s-41.6MiB/s (43.6MB/s-43.6MB/s), io=83.4MiB (87.5MB), run=2005-2005msec 00:21:00.841 15:07:24 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:00.841 15:07:24 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:00.841 15:07:24 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:00.841 15:07:24 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:00.841 15:07:24 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:00.841 15:07:24 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:00.841 15:07:24 -- common/autotest_common.sh@1330 -- # shift 00:21:00.841 15:07:24 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:00.841 15:07:24 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:00.841 15:07:24 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:00.841 15:07:24 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:00.841 15:07:24 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:00.841 15:07:24 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:00.841 15:07:24 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:00.841 15:07:24 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:00.841 15:07:24 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:00.841 15:07:24 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:00.841 15:07:24 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:00.841 15:07:24 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:00.841 15:07:24 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:00.841 15:07:24 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:00.841 15:07:24 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:00.841 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:00.841 fio-3.35 00:21:00.841 Starting 1 thread 00:21:03.381 00:21:03.381 test: (groupid=0, jobs=1): err= 0: pid=95005: Mon Nov 18 15:07:26 2024 00:21:03.381 read: IOPS=9215, BW=144MiB/s (151MB/s)(289MiB/2006msec) 00:21:03.381 slat (usec): min=2, max=118, avg= 3.27, stdev= 2.25 00:21:03.381 clat (usec): min=2031, max=16287, avg=8369.97, stdev=1877.35 00:21:03.381 lat (usec): min=2034, max=16290, avg=8373.24, stdev=1877.46 00:21:03.381 clat percentiles (usec): 00:21:03.381 | 1.00th=[ 4555], 5.00th=[ 5473], 10.00th=[ 5997], 20.00th=[ 6587], 00:21:03.381 | 30.00th=[ 7177], 40.00th=[ 7767], 50.00th=[ 8356], 60.00th=[ 8979], 00:21:03.381 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11469], 00:21:03.381 | 99.00th=[12911], 99.50th=[13566], 99.90th=[14222], 99.95th=[14484], 00:21:03.381 | 99.99th=[15139] 00:21:03.381 bw ( KiB/s): min=65920, max=84352, per=49.31%, avg=72704.00, stdev=8062.69, samples=4 00:21:03.381 iops : 
min= 4120, max= 5272, avg=4544.00, stdev=503.92, samples=4 00:21:03.381 write: IOPS=5529, BW=86.4MiB/s (90.6MB/s)(148MiB/1713msec); 0 zone resets 00:21:03.381 slat (usec): min=29, max=515, avg=32.86, stdev= 9.70 00:21:03.381 clat (usec): min=3231, max=15967, avg=9942.47, stdev=1505.36 00:21:03.381 lat (usec): min=3277, max=15997, avg=9975.33, stdev=1505.62 00:21:03.381 clat percentiles (usec): 00:21:03.381 | 1.00th=[ 7111], 5.00th=[ 7832], 10.00th=[ 8225], 20.00th=[ 8717], 00:21:03.381 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10159], 00:21:03.381 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11863], 95.00th=[12780], 00:21:03.381 | 99.00th=[14091], 99.50th=[14615], 99.90th=[15795], 99.95th=[15926], 00:21:03.381 | 99.99th=[15926] 00:21:03.381 bw ( KiB/s): min=68384, max=87648, per=85.65%, avg=75776.00, stdev=8338.24, samples=4 00:21:03.381 iops : min= 4274, max= 5478, avg=4736.00, stdev=521.14, samples=4 00:21:03.381 lat (msec) : 4=0.30%, 10=71.89%, 20=27.81% 00:21:03.381 cpu : usr=67.23%, sys=21.10%, ctx=7, majf=0, minf=2 00:21:03.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:21:03.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:03.381 issued rwts: total=18486,9472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:03.381 00:21:03.381 Run status group 0 (all jobs): 00:21:03.381 READ: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=289MiB (303MB), run=2006-2006msec 00:21:03.381 WRITE: bw=86.4MiB/s (90.6MB/s), 86.4MiB/s-86.4MiB/s (90.6MB/s-90.6MB/s), io=148MiB (155MB), run=1713-1713msec 00:21:03.381 15:07:26 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:03.381 15:07:26 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:03.381 15:07:26 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:03.381 15:07:26 -- host/fio.sh@51 -- # get_nvme_bdfs 00:21:03.381 15:07:26 -- common/autotest_common.sh@1508 -- # bdfs=() 00:21:03.381 15:07:26 -- common/autotest_common.sh@1508 -- # local bdfs 00:21:03.381 15:07:26 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:03.381 15:07:26 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:03.381 15:07:26 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:21:03.381 15:07:26 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:21:03.381 15:07:26 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:21:03.381 15:07:26 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:21:03.640 Nvme0n1 00:21:03.640 15:07:27 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:03.900 15:07:27 -- host/fio.sh@53 -- # ls_guid=7af9bf2f-31d0-4506-99c7-eed4b1c7e4c4 00:21:03.900 15:07:27 -- host/fio.sh@54 -- # get_lvs_free_mb 7af9bf2f-31d0-4506-99c7-eed4b1c7e4c4 00:21:03.900 15:07:27 -- common/autotest_common.sh@1353 -- # local lvs_uuid=7af9bf2f-31d0-4506-99c7-eed4b1c7e4c4 00:21:03.900 15:07:27 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:03.900 15:07:27 -- common/autotest_common.sh@1355 -- # local fc 00:21:03.900 15:07:27 -- 
common/autotest_common.sh@1356 -- # local cs 00:21:03.900 15:07:27 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:04.159 15:07:27 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:04.159 { 00:21:04.159 "base_bdev": "Nvme0n1", 00:21:04.159 "block_size": 4096, 00:21:04.159 "cluster_size": 1073741824, 00:21:04.159 "free_clusters": 4, 00:21:04.159 "name": "lvs_0", 00:21:04.159 "total_data_clusters": 4, 00:21:04.159 "uuid": "7af9bf2f-31d0-4506-99c7-eed4b1c7e4c4" 00:21:04.159 } 00:21:04.159 ]' 00:21:04.159 15:07:27 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="7af9bf2f-31d0-4506-99c7-eed4b1c7e4c4") .free_clusters' 00:21:04.159 15:07:27 -- common/autotest_common.sh@1358 -- # fc=4 00:21:04.159 15:07:27 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="7af9bf2f-31d0-4506-99c7-eed4b1c7e4c4") .cluster_size' 00:21:04.159 4096 00:21:04.159 15:07:27 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:21:04.159 15:07:27 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:21:04.159 15:07:27 -- common/autotest_common.sh@1363 -- # echo 4096 00:21:04.159 15:07:27 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:04.419 1b9f5d28-87f3-46a1-9cdd-1902fd060ea3 00:21:04.419 15:07:27 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:04.678 15:07:28 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:04.937 15:07:28 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:05.197 15:07:28 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:05.197 15:07:28 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:05.197 15:07:28 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:05.197 15:07:28 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:05.197 15:07:28 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:05.197 15:07:28 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:05.197 15:07:28 -- common/autotest_common.sh@1330 -- # shift 00:21:05.197 15:07:28 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:05.197 15:07:28 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:05.197 15:07:28 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:05.197 15:07:28 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:05.197 15:07:28 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:05.197 15:07:28 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:05.197 15:07:28 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:05.197 15:07:28 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:05.197 15:07:28 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:05.197 15:07:28 -- 
common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:05.197 15:07:28 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:05.197 15:07:28 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:05.197 15:07:28 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:05.197 15:07:28 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:05.197 15:07:28 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:05.456 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:05.456 fio-3.35 00:21:05.456 Starting 1 thread 00:21:08.028 00:21:08.028 test: (groupid=0, jobs=1): err= 0: pid=95161: Mon Nov 18 15:07:31 2024 00:21:08.028 read: IOPS=7413, BW=29.0MiB/s (30.4MB/s)(59.3MiB/2049msec) 00:21:08.028 slat (nsec): min=1750, max=265197, avg=2809.64, stdev=3887.50 00:21:08.028 clat (usec): min=3753, max=56028, avg=9254.54, stdev=2883.30 00:21:08.028 lat (usec): min=3759, max=56030, avg=9257.35, stdev=2883.26 00:21:08.028 clat percentiles (usec): 00:21:08.028 | 1.00th=[ 7111], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8291], 00:21:08.028 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9241], 00:21:08.028 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10683], 00:21:08.028 | 99.00th=[11469], 99.50th=[11994], 99.90th=[54789], 99.95th=[55313], 00:21:08.028 | 99.99th=[55837] 00:21:08.028 bw ( KiB/s): min=29240, max=31216, per=100.00%, avg=30260.00, stdev=828.72, samples=4 00:21:08.028 iops : min= 7310, max= 7804, avg=7565.00, stdev=207.18, samples=4 00:21:08.028 write: IOPS=7402, BW=28.9MiB/s (30.3MB/s)(59.2MiB/2049msec); 0 zone resets 00:21:08.028 slat (nsec): min=1791, max=191121, avg=2877.04, stdev=3046.44 00:21:08.028 clat (usec): min=1692, max=56049, avg=7956.93, stdev=3043.84 00:21:08.028 lat (usec): min=1699, max=56052, avg=7959.81, stdev=3043.80 00:21:08.028 clat percentiles (usec): 00:21:08.028 | 1.00th=[ 6063], 5.00th=[ 6587], 10.00th=[ 6849], 20.00th=[ 7177], 00:21:08.028 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 7898], 00:21:08.028 | 70.00th=[ 8160], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 9110], 00:21:08.028 | 99.00th=[10028], 99.50th=[10421], 99.90th=[54789], 99.95th=[55313], 00:21:08.028 | 99.99th=[55837] 00:21:08.028 bw ( KiB/s): min=29360, max=30808, per=100.00%, avg=30204.00, stdev=607.60, samples=4 00:21:08.028 iops : min= 7340, max= 7702, avg=7551.00, stdev=151.90, samples=4 00:21:08.028 lat (msec) : 2=0.01%, 4=0.06%, 10=91.50%, 20=8.01%, 50=0.08% 00:21:08.028 lat (msec) : 100=0.34% 00:21:08.028 cpu : usr=68.65%, sys=22.85%, ctx=5, majf=0, minf=5 00:21:08.028 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:08.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:08.028 issued rwts: total=15191,15168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:08.028 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:08.028 00:21:08.028 Run status group 0 (all jobs): 00:21:08.028 READ: bw=29.0MiB/s (30.4MB/s), 29.0MiB/s-29.0MiB/s (30.4MB/s-30.4MB/s), io=59.3MiB (62.2MB), run=2049-2049msec 00:21:08.028 WRITE: bw=28.9MiB/s (30.3MB/s), 28.9MiB/s-28.9MiB/s (30.3MB/s-30.3MB/s), io=59.2MiB (62.1MB), run=2049-2049msec 00:21:08.028 
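Both fio passes above go through the SPDK NVMe fio plugin rather than the kernel initiator: the harness LD_PRELOADs build/fio/spdk_nvme and hands the transport address to fio as the filename. A minimal self-contained sketch of such a run follows; the job options are assumptions, and the example_config.fio and mock_sgl_config.fio shipped with SPDK may set different ones:

# Write a small job file and run it through the preloaded SPDK NVMe engine.
cat > /tmp/nvmf-tcp.fio <<'EOF'
[global]
ioengine=spdk            # provided by the LD_PRELOADed plugin
thread=1                 # the SPDK plugin requires threaded mode
rw=randrw
bs=4096
iodepth=128
time_based=1
runtime=10

[test]
filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /usr/src/fio/fio /tmp/nvmf-tcp.fio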
15:07:31 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:08.028 15:07:31 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:08.287 15:07:31 -- host/fio.sh@64 -- # ls_nested_guid=100a65f1-c1b6-48de-a4db-745e6f8d7c33 00:21:08.287 15:07:31 -- host/fio.sh@65 -- # get_lvs_free_mb 100a65f1-c1b6-48de-a4db-745e6f8d7c33 00:21:08.287 15:07:31 -- common/autotest_common.sh@1353 -- # local lvs_uuid=100a65f1-c1b6-48de-a4db-745e6f8d7c33 00:21:08.287 15:07:31 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:08.287 15:07:31 -- common/autotest_common.sh@1355 -- # local fc 00:21:08.287 15:07:31 -- common/autotest_common.sh@1356 -- # local cs 00:21:08.287 15:07:31 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:08.547 15:07:31 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:08.547 { 00:21:08.547 "base_bdev": "Nvme0n1", 00:21:08.547 "block_size": 4096, 00:21:08.547 "cluster_size": 1073741824, 00:21:08.547 "free_clusters": 0, 00:21:08.547 "name": "lvs_0", 00:21:08.547 "total_data_clusters": 4, 00:21:08.547 "uuid": "7af9bf2f-31d0-4506-99c7-eed4b1c7e4c4" 00:21:08.547 }, 00:21:08.547 { 00:21:08.547 "base_bdev": "1b9f5d28-87f3-46a1-9cdd-1902fd060ea3", 00:21:08.547 "block_size": 4096, 00:21:08.547 "cluster_size": 4194304, 00:21:08.547 "free_clusters": 1022, 00:21:08.547 "name": "lvs_n_0", 00:21:08.547 "total_data_clusters": 1022, 00:21:08.547 "uuid": "100a65f1-c1b6-48de-a4db-745e6f8d7c33" 00:21:08.547 } 00:21:08.547 ]' 00:21:08.547 15:07:31 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="100a65f1-c1b6-48de-a4db-745e6f8d7c33") .free_clusters' 00:21:08.547 15:07:32 -- common/autotest_common.sh@1358 -- # fc=1022 00:21:08.547 15:07:32 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="100a65f1-c1b6-48de-a4db-745e6f8d7c33") .cluster_size' 00:21:08.547 4088 00:21:08.547 15:07:32 -- common/autotest_common.sh@1359 -- # cs=4194304 00:21:08.547 15:07:32 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:21:08.547 15:07:32 -- common/autotest_common.sh@1363 -- # echo 4088 00:21:08.547 15:07:32 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:08.806 a6519510-3018-4ded-b00b-3989ea0a488c 00:21:08.806 15:07:32 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:09.064 15:07:32 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:09.322 15:07:32 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:09.582 15:07:33 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:09.582 15:07:33 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:09.582 15:07:33 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:09.582 15:07:33 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:21:09.582 15:07:33 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:09.582 15:07:33 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:09.582 15:07:33 -- common/autotest_common.sh@1330 -- # shift 00:21:09.582 15:07:33 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:09.582 15:07:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:09.582 15:07:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:09.582 15:07:33 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:09.582 15:07:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:09.582 15:07:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:09.582 15:07:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:09.582 15:07:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:09.582 15:07:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:09.582 15:07:33 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:09.582 15:07:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:09.582 15:07:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:09.582 15:07:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:09.582 15:07:33 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:09.582 15:07:33 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:09.841 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:09.841 fio-3.35 00:21:09.841 Starting 1 thread 00:21:12.375 00:21:12.375 test: (groupid=0, jobs=1): err= 0: pid=95278: Mon Nov 18 15:07:35 2024 00:21:12.375 read: IOPS=6653, BW=26.0MiB/s (27.3MB/s)(52.2MiB/2008msec) 00:21:12.375 slat (nsec): min=1760, max=348162, avg=2847.08, stdev=4780.18 00:21:12.375 clat (usec): min=4090, max=16311, avg=10333.59, stdev=1095.47 00:21:12.375 lat (usec): min=4099, max=16314, avg=10336.43, stdev=1095.26 00:21:12.375 clat percentiles (usec): 00:21:12.375 | 1.00th=[ 8029], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9503], 00:21:12.375 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10552], 00:21:12.375 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11731], 95.00th=[12256], 00:21:12.375 | 99.00th=[13042], 99.50th=[13304], 99.90th=[15401], 99.95th=[16057], 00:21:12.375 | 99.99th=[16319] 00:21:12.375 bw ( KiB/s): min=24424, max=27544, per=99.89%, avg=26584.00, stdev=1450.50, samples=4 00:21:12.375 iops : min= 6106, max= 6886, avg=6646.00, stdev=362.62, samples=4 00:21:12.375 write: IOPS=6659, BW=26.0MiB/s (27.3MB/s)(52.2MiB/2008msec); 0 zone resets 00:21:12.375 slat (nsec): min=1869, max=244718, avg=3029.19, stdev=3663.77 00:21:12.375 clat (usec): min=2704, max=15876, avg=8839.06, stdev=908.60 00:21:12.375 lat (usec): min=2717, max=15878, avg=8842.09, stdev=908.52 00:21:12.375 clat percentiles (usec): 00:21:12.375 | 1.00th=[ 6783], 5.00th=[ 7439], 10.00th=[ 7767], 20.00th=[ 8094], 00:21:12.375 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 8979], 00:21:12.375 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[10290], 00:21:12.376 | 99.00th=[11076], 99.50th=[11338], 99.90th=[13566], 99.95th=[14091], 
00:21:12.376 | 99.99th=[15795] 00:21:12.376 bw ( KiB/s): min=25544, max=27336, per=99.96%, avg=26628.00, stdev=763.14, samples=4 00:21:12.376 iops : min= 6386, max= 6834, avg=6657.00, stdev=190.78, samples=4 00:21:12.376 lat (msec) : 4=0.03%, 10=65.06%, 20=34.90% 00:21:12.376 cpu : usr=69.41%, sys=22.32%, ctx=4, majf=0, minf=5 00:21:12.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:12.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:12.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:12.376 issued rwts: total=13360,13373,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:12.376 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:12.376 00:21:12.376 Run status group 0 (all jobs): 00:21:12.376 READ: bw=26.0MiB/s (27.3MB/s), 26.0MiB/s-26.0MiB/s (27.3MB/s-27.3MB/s), io=52.2MiB (54.7MB), run=2008-2008msec 00:21:12.376 WRITE: bw=26.0MiB/s (27.3MB/s), 26.0MiB/s-26.0MiB/s (27.3MB/s-27.3MB/s), io=52.2MiB (54.8MB), run=2008-2008msec 00:21:12.376 15:07:35 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:12.376 15:07:35 -- host/fio.sh@74 -- # sync 00:21:12.376 15:07:35 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:12.635 15:07:35 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:12.894 15:07:36 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:12.894 15:07:36 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:13.152 15:07:36 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:13.725 15:07:37 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:13.725 15:07:37 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:13.725 15:07:37 -- host/fio.sh@86 -- # nvmftestfini 00:21:13.725 15:07:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:13.725 15:07:37 -- nvmf/common.sh@116 -- # sync 00:21:13.725 15:07:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:13.725 15:07:37 -- nvmf/common.sh@119 -- # set +e 00:21:13.725 15:07:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:13.725 15:07:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:13.725 rmmod nvme_tcp 00:21:13.725 rmmod nvme_fabrics 00:21:13.725 rmmod nvme_keyring 00:21:13.725 15:07:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:13.725 15:07:37 -- nvmf/common.sh@123 -- # set -e 00:21:13.725 15:07:37 -- nvmf/common.sh@124 -- # return 0 00:21:13.725 15:07:37 -- nvmf/common.sh@477 -- # '[' -n 94831 ']' 00:21:13.725 15:07:37 -- nvmf/common.sh@478 -- # killprocess 94831 00:21:13.725 15:07:37 -- common/autotest_common.sh@936 -- # '[' -z 94831 ']' 00:21:13.725 15:07:37 -- common/autotest_common.sh@940 -- # kill -0 94831 00:21:14.033 15:07:37 -- common/autotest_common.sh@941 -- # uname 00:21:14.033 15:07:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:14.033 15:07:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94831 00:21:14.033 killing process with pid 94831 00:21:14.033 15:07:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:14.033 15:07:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:14.033 15:07:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 
94831' 00:21:14.033 15:07:37 -- common/autotest_common.sh@955 -- # kill 94831 00:21:14.033 15:07:37 -- common/autotest_common.sh@960 -- # wait 94831 00:21:14.033 15:07:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:14.033 15:07:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:14.033 15:07:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:14.033 15:07:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:14.033 15:07:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:14.033 15:07:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.033 15:07:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.033 15:07:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.337 15:07:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:14.337 00:21:14.337 real 0m19.080s 00:21:14.337 user 1m23.260s 00:21:14.337 sys 0m4.484s 00:21:14.337 15:07:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:14.337 15:07:37 -- common/autotest_common.sh@10 -- # set +x 00:21:14.337 ************************************ 00:21:14.337 END TEST nvmf_fio_host 00:21:14.337 ************************************ 00:21:14.337 15:07:37 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:14.337 15:07:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:14.337 15:07:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:14.337 15:07:37 -- common/autotest_common.sh@10 -- # set +x 00:21:14.337 ************************************ 00:21:14.337 START TEST nvmf_failover 00:21:14.337 ************************************ 00:21:14.337 15:07:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:14.337 * Looking for test storage... 00:21:14.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:14.337 15:07:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:14.337 15:07:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:14.337 15:07:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:14.337 15:07:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:14.337 15:07:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:14.337 15:07:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:14.337 15:07:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:14.337 15:07:37 -- scripts/common.sh@335 -- # IFS=.-: 00:21:14.337 15:07:37 -- scripts/common.sh@335 -- # read -ra ver1 00:21:14.337 15:07:37 -- scripts/common.sh@336 -- # IFS=.-: 00:21:14.337 15:07:37 -- scripts/common.sh@336 -- # read -ra ver2 00:21:14.337 15:07:37 -- scripts/common.sh@337 -- # local 'op=<' 00:21:14.337 15:07:37 -- scripts/common.sh@339 -- # ver1_l=2 00:21:14.337 15:07:37 -- scripts/common.sh@340 -- # ver2_l=1 00:21:14.337 15:07:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:14.337 15:07:37 -- scripts/common.sh@343 -- # case "$op" in 00:21:14.337 15:07:37 -- scripts/common.sh@344 -- # : 1 00:21:14.337 15:07:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:14.337 15:07:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:14.337 15:07:37 -- scripts/common.sh@364 -- # decimal 1 00:21:14.337 15:07:37 -- scripts/common.sh@352 -- # local d=1 00:21:14.337 15:07:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:14.337 15:07:37 -- scripts/common.sh@354 -- # echo 1 00:21:14.337 15:07:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:14.337 15:07:37 -- scripts/common.sh@365 -- # decimal 2 00:21:14.337 15:07:37 -- scripts/common.sh@352 -- # local d=2 00:21:14.337 15:07:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:14.337 15:07:37 -- scripts/common.sh@354 -- # echo 2 00:21:14.337 15:07:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:14.337 15:07:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:14.337 15:07:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:14.337 15:07:37 -- scripts/common.sh@367 -- # return 0 00:21:14.337 15:07:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:14.337 15:07:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:14.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.337 --rc genhtml_branch_coverage=1 00:21:14.337 --rc genhtml_function_coverage=1 00:21:14.337 --rc genhtml_legend=1 00:21:14.337 --rc geninfo_all_blocks=1 00:21:14.337 --rc geninfo_unexecuted_blocks=1 00:21:14.337 00:21:14.337 ' 00:21:14.337 15:07:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:14.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.337 --rc genhtml_branch_coverage=1 00:21:14.337 --rc genhtml_function_coverage=1 00:21:14.337 --rc genhtml_legend=1 00:21:14.337 --rc geninfo_all_blocks=1 00:21:14.337 --rc geninfo_unexecuted_blocks=1 00:21:14.337 00:21:14.337 ' 00:21:14.337 15:07:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:14.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.337 --rc genhtml_branch_coverage=1 00:21:14.337 --rc genhtml_function_coverage=1 00:21:14.337 --rc genhtml_legend=1 00:21:14.337 --rc geninfo_all_blocks=1 00:21:14.337 --rc geninfo_unexecuted_blocks=1 00:21:14.337 00:21:14.337 ' 00:21:14.337 15:07:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:14.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.337 --rc genhtml_branch_coverage=1 00:21:14.337 --rc genhtml_function_coverage=1 00:21:14.337 --rc genhtml_legend=1 00:21:14.337 --rc geninfo_all_blocks=1 00:21:14.337 --rc geninfo_unexecuted_blocks=1 00:21:14.337 00:21:14.337 ' 00:21:14.337 15:07:37 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:14.337 15:07:37 -- nvmf/common.sh@7 -- # uname -s 00:21:14.337 15:07:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:14.337 15:07:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:14.337 15:07:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:14.337 15:07:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:14.337 15:07:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:14.337 15:07:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:14.337 15:07:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:14.337 15:07:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:14.337 15:07:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:14.337 15:07:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:14.337 15:07:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:21:14.337 
15:07:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:21:14.337 15:07:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:14.337 15:07:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:14.337 15:07:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:14.337 15:07:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:14.337 15:07:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.337 15:07:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.337 15:07:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.337 15:07:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.337 15:07:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.337 15:07:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.337 15:07:37 -- paths/export.sh@5 -- # export PATH 00:21:14.337 15:07:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.337 15:07:37 -- nvmf/common.sh@46 -- # : 0 00:21:14.337 15:07:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:14.337 15:07:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:14.337 15:07:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:14.337 15:07:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:14.337 15:07:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:14.338 15:07:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
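nvmf/common.sh generates the host identity used for kernel-initiator connects: nvme gen-hostnqn returns a UUID-based NQN, the same UUID becomes NVME_HOSTID, and both are packed into the NVME_HOST array next to NVME_CONNECT='nvme connect'. A minimal sketch of how those variables would be consumed (hypothetical: the target address, port and subsystem NQN are taken from the listeners created later in this run, and this excerpt never actually issues the connect):

  # hypothetical connect using the identity generated above; not executed in this log
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"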
00:21:14.338 15:07:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:14.338 15:07:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:14.338 15:07:37 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:14.338 15:07:37 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:14.338 15:07:37 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:14.338 15:07:37 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:14.338 15:07:37 -- host/failover.sh@18 -- # nvmftestinit 00:21:14.338 15:07:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:14.338 15:07:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.338 15:07:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:14.338 15:07:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:14.338 15:07:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:14.338 15:07:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.338 15:07:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.338 15:07:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.338 15:07:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:14.338 15:07:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:14.338 15:07:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:14.338 15:07:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:14.338 15:07:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:14.338 15:07:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:14.338 15:07:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:14.338 15:07:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:14.338 15:07:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:14.338 15:07:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:14.338 15:07:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:14.338 15:07:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:14.338 15:07:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:14.338 15:07:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:14.338 15:07:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:14.338 15:07:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:14.338 15:07:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:14.338 15:07:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:14.338 15:07:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:14.338 15:07:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:14.338 Cannot find device "nvmf_tgt_br" 00:21:14.338 15:07:37 -- nvmf/common.sh@154 -- # true 00:21:14.338 15:07:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:14.338 Cannot find device "nvmf_tgt_br2" 00:21:14.338 15:07:37 -- nvmf/common.sh@155 -- # true 00:21:14.338 15:07:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:14.338 15:07:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:14.338 Cannot find device "nvmf_tgt_br" 00:21:14.338 15:07:37 -- nvmf/common.sh@157 -- # true 00:21:14.338 15:07:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:14.610 Cannot find device "nvmf_tgt_br2" 00:21:14.610 15:07:37 -- nvmf/common.sh@158 -- # true 00:21:14.610 15:07:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:14.610 15:07:37 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:21:14.610 15:07:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:14.610 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:14.610 15:07:37 -- nvmf/common.sh@161 -- # true 00:21:14.610 15:07:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:14.610 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:14.610 15:07:37 -- nvmf/common.sh@162 -- # true 00:21:14.610 15:07:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:14.610 15:07:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:14.610 15:07:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:14.610 15:07:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:14.610 15:07:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:14.610 15:07:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:14.610 15:07:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:14.610 15:07:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:14.610 15:07:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:14.610 15:07:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:14.610 15:07:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:14.610 15:07:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:14.610 15:07:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:14.610 15:07:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:14.610 15:07:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:14.610 15:07:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:14.610 15:07:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:14.610 15:07:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:14.610 15:07:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:14.610 15:07:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:14.610 15:07:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:14.610 15:07:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:14.610 15:07:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:14.610 15:07:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:14.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:14.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:21:14.610 00:21:14.610 --- 10.0.0.2 ping statistics --- 00:21:14.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.610 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:21:14.610 15:07:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:14.610 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:14.610 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:21:14.610 00:21:14.610 --- 10.0.0.3 ping statistics --- 00:21:14.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.610 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:21:14.610 15:07:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:14.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:14.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:21:14.610 00:21:14.610 --- 10.0.0.1 ping statistics --- 00:21:14.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.610 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:21:14.610 15:07:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:14.610 15:07:38 -- nvmf/common.sh@421 -- # return 0 00:21:14.610 15:07:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:14.610 15:07:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:14.610 15:07:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:14.610 15:07:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:14.610 15:07:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:14.610 15:07:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:14.610 15:07:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:14.610 15:07:38 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:14.610 15:07:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:14.610 15:07:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:14.610 15:07:38 -- common/autotest_common.sh@10 -- # set +x 00:21:14.610 15:07:38 -- nvmf/common.sh@469 -- # nvmfpid=95560 00:21:14.610 15:07:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:14.610 15:07:38 -- nvmf/common.sh@470 -- # waitforlisten 95560 00:21:14.610 15:07:38 -- common/autotest_common.sh@829 -- # '[' -z 95560 ']' 00:21:14.610 15:07:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.610 15:07:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:14.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.610 15:07:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.610 15:07:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:14.610 15:07:38 -- common/autotest_common.sh@10 -- # set +x 00:21:14.869 [2024-11-18 15:07:38.234596] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:14.869 [2024-11-18 15:07:38.234673] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.869 [2024-11-18 15:07:38.361921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:14.869 [2024-11-18 15:07:38.449369] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:14.869 [2024-11-18 15:07:38.449530] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.869 [2024-11-18 15:07:38.449544] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:14.869 [2024-11-18 15:07:38.449551] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:14.869 [2024-11-18 15:07:38.449640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.869 [2024-11-18 15:07:38.450250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:14.869 [2024-11-18 15:07:38.450287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.804 15:07:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:15.804 15:07:39 -- common/autotest_common.sh@862 -- # return 0 00:21:15.804 15:07:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:15.804 15:07:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:15.804 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:21:15.804 15:07:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.804 15:07:39 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:16.063 [2024-11-18 15:07:39.543881] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.063 15:07:39 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:16.322 Malloc0 00:21:16.322 15:07:39 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:16.581 15:07:40 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:16.841 15:07:40 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:17.100 [2024-11-18 15:07:40.533744] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.100 15:07:40 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:17.358 [2024-11-18 15:07:40.733926] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:17.358 15:07:40 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:17.358 [2024-11-18 15:07:40.930203] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:17.618 15:07:40 -- host/failover.sh@31 -- # bdevperf_pid=95670 00:21:17.618 15:07:40 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:17.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
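Condensed from the xtrace above, failover.sh provisions the target with a short RPC sequence before the bdevperf initiator is started; a stripped-down sketch of that sequence, with the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path shortened to rpc.py for readability:

  rpc.py nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport, 8192-byte IO unit
  rpc.py bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB ramdisk, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # repeated for ports 4421 and 4422

bdevperf then attaches to the 4420 listener over its own RPC socket (bdev_nvme_attach_controller on /var/tmp/bdevperf.sock), which is the path the failover test exercises by removing and re-adding listeners in the trace that follows.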
00:21:17.618 15:07:40 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:17.618 15:07:40 -- host/failover.sh@34 -- # waitforlisten 95670 /var/tmp/bdevperf.sock 00:21:17.618 15:07:40 -- common/autotest_common.sh@829 -- # '[' -z 95670 ']' 00:21:17.618 15:07:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.618 15:07:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:17.618 15:07:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:17.618 15:07:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:17.618 15:07:40 -- common/autotest_common.sh@10 -- # set +x 00:21:18.556 15:07:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:18.556 15:07:41 -- common/autotest_common.sh@862 -- # return 0 00:21:18.556 15:07:41 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:18.815 NVMe0n1 00:21:18.815 15:07:42 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:19.075 00:21:19.075 15:07:42 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:19.075 15:07:42 -- host/failover.sh@39 -- # run_test_pid=95719 00:21:19.075 15:07:42 -- host/failover.sh@41 -- # sleep 1 00:21:20.014 15:07:43 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:20.274 [2024-11-18 15:07:43.792010] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792072] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792082] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792090] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792097] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792105] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792112] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792119] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792125] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792133] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with 
the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792139] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792146] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792153] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792159] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792166] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792173] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792179] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792186] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792193] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792200] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792207] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792214] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792220] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792227] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792234] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792243] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792251] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.274 [2024-11-18 15:07:43.792259] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.275 [2024-11-18 15:07:43.792267] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.275 [2024-11-18 15:07:43.792274] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.275 [2024-11-18 15:07:43.792283] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.275 [2024-11-18 15:07:43.792291] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.275 [2024-11-18 15:07:43.792298] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.275 [2024-11-18 15:07:43.792305] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.275 [2024-11-18 15:07:43.792312] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.275 [2024-11-18 15:07:43.792360] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.275 [2024-11-18 15:07:43.792378] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.275 [2024-11-18 15:07:43.792386] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b31c90 is same with the state(5) to be set 00:21:20.275 15:07:43 -- host/failover.sh@45 -- # sleep 3 00:21:23.568 15:07:46 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:23.568 00:21:23.568 15:07:47 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:23.827 [2024-11-18 15:07:47.330434] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330493] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330502] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330510] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330517] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330523] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330531] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330538] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330545] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330552] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330559] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330566] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330573] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330580] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330587] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330593] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330601] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330607] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330614] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330621] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330628] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330635] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330641] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330648] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330655] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330661] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330668] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330675] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330684] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330691] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330699] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.827 [2024-11-18 15:07:47.330715] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.828 [2024-11-18 15:07:47.330736] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.828 [2024-11-18 15:07:47.330744] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the 
state(5) to be set 00:21:23.828 [2024-11-18 15:07:47.330751] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.828 [2024-11-18 15:07:47.330758] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.828 [2024-11-18 15:07:47.330765] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.828 [2024-11-18 15:07:47.330772] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.828 [2024-11-18 15:07:47.330779] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.828 [2024-11-18 15:07:47.330786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.828 [2024-11-18 15:07:47.330793] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.828 [2024-11-18 15:07:47.330801] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.828 [2024-11-18 15:07:47.330808] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.828 [2024-11-18 15:07:47.330814] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.828 [2024-11-18 15:07:47.330821] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.828 [2024-11-18 15:07:47.330828] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.828 [2024-11-18 15:07:47.330834] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.828 [2024-11-18 15:07:47.330841] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.828 [2024-11-18 15:07:47.330848] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.828 [2024-11-18 15:07:47.330855] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.828 [2024-11-18 15:07:47.330862] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.828 [2024-11-18 15:07:47.330868] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.828 [2024-11-18 15:07:47.330875] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33380 is same with the state(5) to be set 00:21:23.828 15:07:47 -- host/failover.sh@50 -- # sleep 3 00:21:27.121 15:07:50 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:27.121 [2024-11-18 15:07:50.607226] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.121 15:07:50 -- host/failover.sh@55 -- # 
sleep 1
00:21:28.056 15:07:51 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:28.315 [2024-11-18 15:07:51.880984] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33a60 is same with the state(5) to be set
00:21:28.315-00:21:28.316 [... the same tcp.c:1576:nvmf_tcp_qpair_set_recv_state *ERROR* for tqpair=0x1b33a60 repeated several dozen times between 15:07:51.881084 and 15:07:51.881472 ...]
00:21:28.575 15:07:51 -- host/failover.sh@59 -- # wait 95719
00:21:35.146 0
00:21:35.146 15:07:57 -- host/failover.sh@61 -- # killprocess 95670
00:21:35.146 15:07:57 -- common/autotest_common.sh@936 -- # '[' -z 95670 ']'
00:21:35.146 15:07:57 -- common/autotest_common.sh@940 -- # kill -0 95670
00:21:35.146 15:07:57 -- common/autotest_common.sh@941 -- # uname
00:21:35.146 15:07:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:35.146 15:07:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95670
00:21:35.146 killing process with pid 95670
00:21:35.146 15:07:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:35.146 15:07:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:35.146 15:07:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95670'
00:21:35.146 15:07:57 -- common/autotest_common.sh@955 -- # kill 95670
00:21:35.146 15:07:57 -- common/autotest_common.sh@960 -- # wait 95670
00:21:35.146 15:07:57 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:21:35.146 [2024-11-18 15:07:40.990107] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:21:35.146 [2024-11-18 15:07:40.990186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95670 ]
00:21:35.146 [2024-11-18 15:07:41.122329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:35.146 [2024-11-18 15:07:41.192902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:35.146 Running I/O for 15 seconds...
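The killprocess trace above (common/autotest_common.sh@936-960) shows the steps the helper walks through before killing the bdevperf process: check the pid argument, confirm the process is alive, look up its command name, skip the special handling for sudo wrappers, then kill and wait. A minimal bash sketch of that flow, reconstructed only from the traced commands (the real helper in common/autotest_common.sh may differ in details such as how the sudo case is handled):

# killprocess sketch, reconstructed from the xtrace above; illustrative only
killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                 # no pid supplied
        kill -0 "$pid" || return 0                # process already gone
        local process_name=
        if [ "$(uname)" = Linux ]; then
                process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" != sudo ]; then      # the real helper treats sudo wrappers specially
                echo "killing process with pid $pid"
                kill "$pid"
        fi
        wait "$pid"
}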
00:21:35.146 [2024-11-18 15:07:43.792758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.146 [2024-11-18 15:07:43.792802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.146 [2024-11-18 15:07:43.792827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.146 [2024-11-18 15:07:43.792842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.792857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.792870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.792884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.792896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.792910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.792922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.792935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.792947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.792961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.792972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.792986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.792998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793107] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793434] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:29 nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.147 [2024-11-18 15:07:43.793793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.793984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.147 [2024-11-18 15:07:43.793996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.794008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12872 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.147 [2024-11-18 15:07:43.794020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.147 [2024-11-18 15:07:43.794033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.794044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.794068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.148 [2024-11-18 15:07:43.794094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.794118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.794143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.148 [2024-11-18 15:07:43.794171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.148 [2024-11-18 15:07:43.794195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.794219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.148 [2024-11-18 15:07:43.794250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.148 
[2024-11-18 15:07:43.794275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.148 [2024-11-18 15:07:43.794300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.794365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.148 [2024-11-18 15:07:43.794411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.794438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.794464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.794510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.794546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.794583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.794633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.794666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.794756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.794841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.794885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.794923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.148 [2024-11-18 15:07:43.794966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.794987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.148 [2024-11-18 15:07:43.795007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.795030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.795050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.795070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.148 [2024-11-18 15:07:43.795088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.795113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.148 [2024-11-18 15:07:43.795133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.795153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.148 [2024-11-18 15:07:43.795171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.795194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.148 [2024-11-18 15:07:43.795215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.795235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.148 [2024-11-18 15:07:43.795253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.795276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.795296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.795364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.795387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.795409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.795429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.795454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.795474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.795494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.795514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.795576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.795598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.795624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.795650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.148 [2024-11-18 15:07:43.795687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.148 [2024-11-18 15:07:43.795727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.795749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.149 [2024-11-18 15:07:43.795769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.795794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.149 [2024-11-18 15:07:43.795814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.795834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.795854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.795879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.149 [2024-11-18 15:07:43.795899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.795920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.795941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.795966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.795996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.796034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.149 [2024-11-18 15:07:43.796058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.796079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.149 [2024-11-18 15:07:43.796098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.796120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.149 [2024-11-18 15:07:43.796142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.796163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.149 [2024-11-18 15:07:43.796181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 
[2024-11-18 15:07:43.796217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.796239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.796259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.149 [2024-11-18 15:07:43.796278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.796301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.149 [2024-11-18 15:07:43.796321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.796379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.796400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.796436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.796460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.796481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.796499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.796522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.796543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.796563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.796582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.796605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.796637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.796658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.796698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.796723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.796741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.796762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.796784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.796804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.796821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.796843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.796864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.796884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.796902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.796924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.796945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.796966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.796983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.797044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.149 [2024-11-18 15:07:43.797069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.797093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.149 [2024-11-18 15:07:43.797114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.797140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.149 [2024-11-18 15:07:43.797160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.797192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:52 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.149 [2024-11-18 15:07:43.797212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.797244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.797268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.797289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.797308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.797330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.797378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.797401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.797423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.797446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.797470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.797492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.797514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.797535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.797553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.797575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.149 [2024-11-18 15:07:43.797597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.149 [2024-11-18 15:07:43.797618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:43.797637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:43.797673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12760 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:43.797694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:43.797715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:43.797732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:43.797754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:43.797775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:43.797795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:43.797824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:43.797849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:43.797869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:43.797894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:43.797913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:43.797936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2388130 is same with the state(5) to be set 00:21:35.150 [2024-11-18 15:07:43.797959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:35.150 [2024-11-18 15:07:43.797972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:35.150 [2024-11-18 15:07:43.797997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12848 len:8 PRP1 0x0 PRP2 0x0 00:21:35.150 [2024-11-18 15:07:43.798018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:43.798091] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2388130 was disconnected and freed. reset controller. 
00:21:35.150 [2024-11-18 15:07:43.798119] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:35.150 [2024-11-18 15:07:43.798191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.150 [2024-11-18 15:07:43.798220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:43.798241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.150 [2024-11-18 15:07:43.798261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:43.798288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.150 [2024-11-18 15:07:43.798306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:43.798357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.150 [2024-11-18 15:07:43.798380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:43.798399] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.150 [2024-11-18 15:07:43.798470] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2303cb0 (9): Bad file descriptor 00:21:35.150 [2024-11-18 15:07:43.800647] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.150 [2024-11-18 15:07:43.828588] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
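The bdev_nvme_failover_trid NOTICE above records the host side switching from 10.0.0.2:4420 to 10.0.0.2:4421 after the active listener disappeared. On the target side, a swap like that is driven with the standard SPDK rpc.py listener RPCs; the sketch below is only an illustration of that pattern (nvmf_subsystem_remove_listener with these flags appears in this log, nvmf_subsystem_add_listener is its standard counterpart, but the exact ports and ordering used by host/failover.sh may differ):

# hypothetical listener swap to force a failover from 4420 to 4421
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
# make the subsystem reachable on the backup port first ...
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
# ... then drop the primary port; outstanding I/O on 4420 completes with
# "ABORTED - SQ DELETION" (as logged above) and bdev_nvme resets the
# controller against the remaining transport ID
$RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420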
00:21:35.150 [2024-11-18 15:07:47.330984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:74648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:74672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331298] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:74872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:74880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.150 [2024-11-18 15:07:47.331692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.150 [2024-11-18 15:07:47.331705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.331716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.331729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.331740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.331753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.331771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.331785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.331797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.331810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.331821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.331834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.331847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.331859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.331871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.331883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.151 [2024-11-18 15:07:47.331895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.331908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.151 [2024-11-18 15:07:47.331926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.331939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.331950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.331963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.151 [2024-11-18 15:07:47.331974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.331987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.331998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.332011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.332022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.332035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.332046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.332059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.332070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.332083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.332102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.332117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75024 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.332129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.332141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.332153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.332166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.332177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.332190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.332201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.332214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.332225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.332238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.332250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.332262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.332274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.332287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.151 [2024-11-18 15:07:47.332298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.332311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.332341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.332357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.332369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.332383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:35.151 [2024-11-18 15:07:47.332395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.332408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.151 [2024-11-18 15:07:47.332419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.332441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.332454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.332467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.332478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.332491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.332502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.332515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.151 [2024-11-18 15:07:47.332526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.151 [2024-11-18 15:07:47.332539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.332550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.332563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.152 [2024-11-18 15:07:47.332574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.332587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.332598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.332611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.152 [2024-11-18 15:07:47.332622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.332635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.332646] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.332659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.152 [2024-11-18 15:07:47.332670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.332683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.332694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.332707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.152 [2024-11-18 15:07:47.332718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.332731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.332762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.332776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.332787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.332800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.332812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.332825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.332836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.332849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.332860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.332873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.332884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.332897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.332908] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.332920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.332932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.332945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.332956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.332968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.332979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.332992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.333003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.333028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.152 [2024-11-18 15:07:47.333039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.333052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.152 [2024-11-18 15:07:47.333064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.333084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.333096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.333109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.152 [2024-11-18 15:07:47.333120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.333133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.333145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.333158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.333174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.333187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.152 [2024-11-18 15:07:47.333198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.333211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.333223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.333236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.152 [2024-11-18 15:07:47.333247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.333259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.333271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.333283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.152 [2024-11-18 15:07:47.333294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.333307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.152 [2024-11-18 15:07:47.333330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.333345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.333356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.333374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.333385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.333397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.333408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.333428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.152 [2024-11-18 15:07:47.333440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:35.152 [2024-11-18 15:07:47.333457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.333468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.333481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.152 [2024-11-18 15:07:47.333492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.333505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.152 [2024-11-18 15:07:47.333516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.333529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.152 [2024-11-18 15:07:47.333540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.333553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.152 [2024-11-18 15:07:47.333564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.152 [2024-11-18 15:07:47.333577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.333592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.333605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.333616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.333629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.333641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.333653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.333664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.333677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.333688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.333700] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.333712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.333724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.333742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.333755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.333766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.333779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.333790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.333803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.153 [2024-11-18 15:07:47.333815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.333828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.153 [2024-11-18 15:07:47.333839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.333858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.153 [2024-11-18 15:07:47.333869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.333882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.333893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.333905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.153 [2024-11-18 15:07:47.333917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.333929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.153 [2024-11-18 15:07:47.333940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.333953] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.333964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.333977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.333992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.334005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.334016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.334029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.153 [2024-11-18 15:07:47.334040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.334062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.153 [2024-11-18 15:07:47.334074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.334087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.334098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.334110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.153 [2024-11-18 15:07:47.334122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.334135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.334145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.334158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.334169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.334181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.334193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.334205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 
lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.334216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.334228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.334239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.334257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.334269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.334281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.334292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.334305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.334327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.334343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.153 [2024-11-18 15:07:47.334354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.334366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2362b10 is same with the state(5) to be set 00:21:35.153 [2024-11-18 15:07:47.334387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:35.153 [2024-11-18 15:07:47.334397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:35.153 [2024-11-18 15:07:47.334411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75432 len:8 PRP1 0x0 PRP2 0x0 00:21:35.153 [2024-11-18 15:07:47.334422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.153 [2024-11-18 15:07:47.334473] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2362b10 was disconnected and freed. reset controller. 
00:21:35.153 [2024-11-18 15:07:47.334489] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:21:35.153 [2024-11-18 15:07:47.334537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:35.153 [2024-11-18 15:07:47.334572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:35.153 [2024-11-18 15:07:47.334600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:35.153 [2024-11-18 15:07:47.334612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:35.153 [2024-11-18 15:07:47.334624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:35.153 [2024-11-18 15:07:47.334636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:35.153 [2024-11-18 15:07:47.334647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:35.153 [2024-11-18 15:07:47.334659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:35.153 [2024-11-18 15:07:47.334671] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:35.153 [2024-11-18 15:07:47.336705] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:21:35.153 [2024-11-18 15:07:47.336750] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2303cb0 (9): Bad file descriptor 
00:21:35.153 [2024-11-18 15:07:47.368189] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:35.154 [2024-11-18 15:07:51.881593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.881650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.881707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.881721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.881735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.881746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.881759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.881770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.881783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.881825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.881841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.881853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.881865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.881892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.881906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.881918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.881932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.881944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.881956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.881968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.881981] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.881993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882241] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3552 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.154 [2024-11-18 15:07:51.882898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.154 [2024-11-18 15:07:51.882910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.882953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 
[2024-11-18 15:07:51.882980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.883020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.883082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.883108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.155 [2024-11-18 15:07:51.883149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.883191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.155 [2024-11-18 15:07:51.883217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.155 [2024-11-18 15:07:51.883244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.883270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.883296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.155 [2024-11-18 15:07:51.883354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.883380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.883406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.883433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.883459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.883493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.883533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.883559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.883609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.883652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.155 [2024-11-18 15:07:51.883677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.155 [2024-11-18 15:07:51.883717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.883743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.155 [2024-11-18 15:07:51.883784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.883810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.155 [2024-11-18 15:07:51.883837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.155 [2024-11-18 15:07:51.883862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.883887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.883936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.883962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.883976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.883988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:35.155 [2024-11-18 15:07:51.884001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.884013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.884046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.884058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.884071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.155 [2024-11-18 15:07:51.884083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.155 [2024-11-18 15:07:51.884096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.155 [2024-11-18 15:07:51.884107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.884131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.156 [2024-11-18 15:07:51.884156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.884188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.884212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.884253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.884285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884299] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.884311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.884368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.884412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.884438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.884490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.884533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.156 [2024-11-18 15:07:51.884560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.156 [2024-11-18 15:07:51.884587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.156 [2024-11-18 15:07:51.884628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.156 [2024-11-18 15:07:51.884655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:115 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.884697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.884754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.156 [2024-11-18 15:07:51.884802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.884828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.884875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.884901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.156 [2024-11-18 15:07:51.884926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.156 [2024-11-18 15:07:51.884952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.884965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.884992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.885006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.885050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.885080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:35.156 [2024-11-18 15:07:51.885094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.885123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.156 [2024-11-18 15:07:51.885135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.885149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.156 [2024-11-18 15:07:51.885162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.885175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.885188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.885217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.156 [2024-11-18 15:07:51.885230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.885252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.156 [2024-11-18 15:07:51.885266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.885281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.156 [2024-11-18 15:07:51.885293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.885307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.885326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.885355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.885368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.885382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.885423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.885438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 
15:07:51.885455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.885481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.885494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.156 [2024-11-18 15:07:51.885508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.156 [2024-11-18 15:07:51.885528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.157 [2024-11-18 15:07:51.885555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.157 [2024-11-18 15:07:51.885612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.157 [2024-11-18 15:07:51.885650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.157 [2024-11-18 15:07:51.885664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.157 [2024-11-18 15:07:51.885695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.157 [2024-11-18 15:07:51.885707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.157 [2024-11-18 15:07:51.885736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.157 [2024-11-18 15:07:51.885749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.157 [2024-11-18 15:07:51.885763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.157 [2024-11-18 15:07:51.885786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.157 [2024-11-18 15:07:51.885801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.157 [2024-11-18 15:07:51.885819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.157 [2024-11-18 15:07:51.885832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.157 [2024-11-18 15:07:51.885845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.157 [2024-11-18 15:07:51.885858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.157 [2024-11-18 15:07:51.885870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.157 [2024-11-18 15:07:51.885883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.157 [2024-11-18 15:07:51.885895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.157 [2024-11-18 15:07:51.885908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.157 [2024-11-18 15:07:51.885920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.157 [2024-11-18 15:07:51.885934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.157 [2024-11-18 15:07:51.885945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.157 [2024-11-18 15:07:51.885959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.157 [2024-11-18 15:07:51.885971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.157 [2024-11-18 15:07:51.885984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.157 [2024-11-18 15:07:51.885996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.157 [2024-11-18 15:07:51.886009] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a210 is same with the state(5) to be set 00:21:35.157 [2024-11-18 15:07:51.886029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:35.157 [2024-11-18 15:07:51.886039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:35.157 [2024-11-18 15:07:51.886049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3664 len:8 PRP1 0x0 PRP2 0x0 00:21:35.157 [2024-11-18 15:07:51.886061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.157 [2024-11-18 15:07:51.886116] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x238a210 was disconnected and freed. reset controller. 
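The abort storm above is what a forced path switch looks like from the initiator: deleting the submission queue on one portal completes every outstanding I/O on qid:1 with ABORTED - SQ DELETION, after which bdev_nvme resets the controller onto the next registered path. As a minimal sketch of how the failover test builds that multipath setup and forces a switch, the commands below are the rpc.py invocations that appear verbatim later in this transcript (the grouping into a loop is condensed for illustration, the primary 4420 listener is created earlier in the run, and the exact trigger for the switch logged at this point is not shown in this excerpt):

# expose the subsystem on the two alternate ports (4420 is already listening)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# register all three paths with the bdevperf app through its RPC socket
for port in 4420 4421 4422; do
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
# drop the active path: in-flight I/O completes as ABORTED - SQ DELETION (the notices above)
# and bdev_nvme resets the controller onto the next registered path
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1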
00:21:35.157 [2024-11-18 15:07:51.886132] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:35.157 [2024-11-18 15:07:51.886187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.157 [2024-11-18 15:07:51.886208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.157 [2024-11-18 15:07:51.886232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.157 [2024-11-18 15:07:51.886245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.157 [2024-11-18 15:07:51.886257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.157 [2024-11-18 15:07:51.886269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.157 [2024-11-18 15:07:51.886282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.157 [2024-11-18 15:07:51.886293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.157 [2024-11-18 15:07:51.886305] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.157 [2024-11-18 15:07:51.888409] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.157 [2024-11-18 15:07:51.888450] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2303cb0 (9): Bad file descriptor 00:21:35.157 [2024-11-18 15:07:51.905839] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:35.157 00:21:35.157 Latency(us) 00:21:35.157 [2024-11-18T15:07:58.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.157 [2024-11-18T15:07:58.747Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:35.157 Verification LBA range: start 0x0 length 0x4000 00:21:35.157 NVMe0n1 : 15.01 15370.95 60.04 276.50 0.00 8165.63 532.48 14834.97 00:21:35.157 [2024-11-18T15:07:58.747Z] =================================================================================================================== 00:21:35.157 [2024-11-18T15:07:58.747Z] Total : 15370.95 60.04 276.50 0.00 8165.63 532.48 14834.97 00:21:35.157 Received shutdown signal, test time was about 15.000000 seconds 00:21:35.157 00:21:35.157 Latency(us) 00:21:35.157 [2024-11-18T15:07:58.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.157 [2024-11-18T15:07:58.747Z] =================================================================================================================== 00:21:35.157 [2024-11-18T15:07:58.747Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:35.157 15:07:57 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:35.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
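The first phase passes or fails on how many successful controller resets were recorded: three path switches are expected, so the script counts 'Resetting controller successful' messages in the captured output (the grep and the resulting count=3 appear immediately around this point in the transcript). A hedged sketch of that assertion; the capture file is the try.txt that the transcript later cats and removes, but the redirection is not visible in the xtrace here, so $testlog below is an illustrative stand-in:

# count the reset notices produced by the forced failovers; fail the test if any are missing
count=$(grep -c 'Resetting controller successful' "$testlog")
if (( count != 3 )); then
  echo "expected 3 successful resets, saw $count" >&2
  exit 1
fi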
00:21:35.157 15:07:57 -- host/failover.sh@65 -- # count=3 00:21:35.157 15:07:57 -- host/failover.sh@67 -- # (( count != 3 )) 00:21:35.157 15:07:57 -- host/failover.sh@73 -- # bdevperf_pid=95922 00:21:35.157 15:07:57 -- host/failover.sh@75 -- # waitforlisten 95922 /var/tmp/bdevperf.sock 00:21:35.157 15:07:57 -- common/autotest_common.sh@829 -- # '[' -z 95922 ']' 00:21:35.157 15:07:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:35.157 15:07:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:35.157 15:07:57 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:35.157 15:07:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:35.157 15:07:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:35.157 15:07:57 -- common/autotest_common.sh@10 -- # set +x 00:21:35.416 15:07:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:35.416 15:07:58 -- common/autotest_common.sh@862 -- # return 0 00:21:35.416 15:07:58 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:35.675 [2024-11-18 15:07:59.043427] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:35.675 15:07:59 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:35.675 [2024-11-18 15:07:59.243625] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:35.675 15:07:59 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:35.933 NVMe0n1 00:21:36.192 15:07:59 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:36.450 00:21:36.450 15:07:59 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:36.709 00:21:36.709 15:08:00 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:36.709 15:08:00 -- host/failover.sh@82 -- # grep -q NVMe0 00:21:36.967 15:08:00 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:37.226 15:08:00 -- host/failover.sh@87 -- # sleep 3 00:21:40.514 15:08:03 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:40.514 15:08:03 -- host/failover.sh@88 -- # grep -q NVMe0 00:21:40.514 15:08:03 -- host/failover.sh@90 -- # run_test_pid=96060 00:21:40.514 15:08:03 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:40.514 15:08:03 -- host/failover.sh@92 -- # wait 96060 00:21:41.449 0 00:21:41.449 15:08:04 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:41.450 [2024-11-18 15:07:57.937878] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:41.450 [2024-11-18 15:07:57.937997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95922 ] 00:21:41.450 [2024-11-18 15:07:58.072310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.450 [2024-11-18 15:07:58.137604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.450 [2024-11-18 15:08:00.597184] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:41.450 [2024-11-18 15:08:00.597282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.450 [2024-11-18 15:08:00.597307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.450 [2024-11-18 15:08:00.597346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.450 [2024-11-18 15:08:00.597364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.450 [2024-11-18 15:08:00.597377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.450 [2024-11-18 15:08:00.597390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.450 [2024-11-18 15:08:00.597403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.450 [2024-11-18 15:08:00.597416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.450 [2024-11-18 15:08:00.597428] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:41.450 [2024-11-18 15:08:00.597484] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:41.450 [2024-11-18 15:08:00.597513] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddfcb0 (9): Bad file descriptor 00:21:41.450 [2024-11-18 15:08:00.605323] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:41.450 Running I/O for 1 seconds... 
00:21:41.450 00:21:41.450 Latency(us) 00:21:41.450 [2024-11-18T15:08:05.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.450 [2024-11-18T15:08:05.040Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:41.450 Verification LBA range: start 0x0 length 0x4000 00:21:41.450 NVMe0n1 : 1.01 14886.62 58.15 0.00 0.00 8560.24 1273.48 10009.13 00:21:41.450 [2024-11-18T15:08:05.040Z] =================================================================================================================== 00:21:41.450 [2024-11-18T15:08:05.040Z] Total : 14886.62 58.15 0.00 0.00 8560.24 1273.48 10009.13 00:21:41.450 15:08:04 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:41.450 15:08:04 -- host/failover.sh@95 -- # grep -q NVMe0 00:21:41.709 15:08:05 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:41.968 15:08:05 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:41.968 15:08:05 -- host/failover.sh@99 -- # grep -q NVMe0 00:21:42.227 15:08:05 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:42.485 15:08:06 -- host/failover.sh@101 -- # sleep 3 00:21:45.808 15:08:09 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:45.808 15:08:09 -- host/failover.sh@103 -- # grep -q NVMe0 00:21:45.808 15:08:09 -- host/failover.sh@108 -- # killprocess 95922 00:21:45.808 15:08:09 -- common/autotest_common.sh@936 -- # '[' -z 95922 ']' 00:21:45.808 15:08:09 -- common/autotest_common.sh@940 -- # kill -0 95922 00:21:45.808 15:08:09 -- common/autotest_common.sh@941 -- # uname 00:21:45.808 15:08:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:45.808 15:08:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95922 00:21:45.808 killing process with pid 95922 00:21:45.808 15:08:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:45.808 15:08:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:45.808 15:08:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95922' 00:21:45.808 15:08:09 -- common/autotest_common.sh@955 -- # kill 95922 00:21:45.808 15:08:09 -- common/autotest_common.sh@960 -- # wait 95922 00:21:46.067 15:08:09 -- host/failover.sh@110 -- # sync 00:21:46.067 15:08:09 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:46.327 15:08:09 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:46.327 15:08:09 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:46.327 15:08:09 -- host/failover.sh@116 -- # nvmftestfini 00:21:46.327 15:08:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:46.327 15:08:09 -- nvmf/common.sh@116 -- # sync 00:21:46.327 15:08:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:46.327 15:08:09 -- nvmf/common.sh@119 -- # set +e 00:21:46.327 15:08:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:46.327 15:08:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:46.327 rmmod nvme_tcp 
00:21:46.327 rmmod nvme_fabrics 00:21:46.327 rmmod nvme_keyring 00:21:46.327 15:08:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:46.327 15:08:09 -- nvmf/common.sh@123 -- # set -e 00:21:46.327 15:08:09 -- nvmf/common.sh@124 -- # return 0 00:21:46.327 15:08:09 -- nvmf/common.sh@477 -- # '[' -n 95560 ']' 00:21:46.327 15:08:09 -- nvmf/common.sh@478 -- # killprocess 95560 00:21:46.327 15:08:09 -- common/autotest_common.sh@936 -- # '[' -z 95560 ']' 00:21:46.327 15:08:09 -- common/autotest_common.sh@940 -- # kill -0 95560 00:21:46.327 15:08:09 -- common/autotest_common.sh@941 -- # uname 00:21:46.327 15:08:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:46.327 15:08:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95560 00:21:46.327 15:08:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:46.327 15:08:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:46.327 killing process with pid 95560 00:21:46.327 15:08:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95560' 00:21:46.327 15:08:09 -- common/autotest_common.sh@955 -- # kill 95560 00:21:46.327 15:08:09 -- common/autotest_common.sh@960 -- # wait 95560 00:21:46.896 15:08:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:46.896 15:08:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:46.896 15:08:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:46.896 15:08:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:46.896 15:08:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:46.896 15:08:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.896 15:08:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.896 15:08:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.896 15:08:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:46.896 00:21:46.896 real 0m32.590s 00:21:46.896 user 2m5.704s 00:21:46.896 sys 0m4.910s 00:21:46.896 15:08:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:46.896 ************************************ 00:21:46.896 15:08:10 -- common/autotest_common.sh@10 -- # set +x 00:21:46.896 END TEST nvmf_failover 00:21:46.896 ************************************ 00:21:46.896 15:08:10 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:46.896 15:08:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:46.896 15:08:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:46.896 15:08:10 -- common/autotest_common.sh@10 -- # set +x 00:21:46.896 ************************************ 00:21:46.896 START TEST nvmf_discovery 00:21:46.896 ************************************ 00:21:46.896 15:08:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:46.896 * Looking for test storage... 
00:21:46.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:46.896 15:08:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:46.896 15:08:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:46.896 15:08:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:46.896 15:08:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:46.896 15:08:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:46.896 15:08:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:46.896 15:08:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:46.896 15:08:10 -- scripts/common.sh@335 -- # IFS=.-: 00:21:46.896 15:08:10 -- scripts/common.sh@335 -- # read -ra ver1 00:21:46.896 15:08:10 -- scripts/common.sh@336 -- # IFS=.-: 00:21:46.896 15:08:10 -- scripts/common.sh@336 -- # read -ra ver2 00:21:46.896 15:08:10 -- scripts/common.sh@337 -- # local 'op=<' 00:21:46.896 15:08:10 -- scripts/common.sh@339 -- # ver1_l=2 00:21:46.896 15:08:10 -- scripts/common.sh@340 -- # ver2_l=1 00:21:46.896 15:08:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:46.896 15:08:10 -- scripts/common.sh@343 -- # case "$op" in 00:21:46.896 15:08:10 -- scripts/common.sh@344 -- # : 1 00:21:46.896 15:08:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:46.896 15:08:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:46.896 15:08:10 -- scripts/common.sh@364 -- # decimal 1 00:21:46.896 15:08:10 -- scripts/common.sh@352 -- # local d=1 00:21:46.896 15:08:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:46.896 15:08:10 -- scripts/common.sh@354 -- # echo 1 00:21:46.896 15:08:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:46.896 15:08:10 -- scripts/common.sh@365 -- # decimal 2 00:21:46.896 15:08:10 -- scripts/common.sh@352 -- # local d=2 00:21:46.896 15:08:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:46.896 15:08:10 -- scripts/common.sh@354 -- # echo 2 00:21:46.896 15:08:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:46.896 15:08:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:46.896 15:08:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:46.896 15:08:10 -- scripts/common.sh@367 -- # return 0 00:21:46.896 15:08:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:46.896 15:08:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:46.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.896 --rc genhtml_branch_coverage=1 00:21:46.896 --rc genhtml_function_coverage=1 00:21:46.896 --rc genhtml_legend=1 00:21:46.896 --rc geninfo_all_blocks=1 00:21:46.896 --rc geninfo_unexecuted_blocks=1 00:21:46.896 00:21:46.896 ' 00:21:46.896 15:08:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:46.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.896 --rc genhtml_branch_coverage=1 00:21:46.896 --rc genhtml_function_coverage=1 00:21:46.896 --rc genhtml_legend=1 00:21:46.896 --rc geninfo_all_blocks=1 00:21:46.896 --rc geninfo_unexecuted_blocks=1 00:21:46.896 00:21:46.896 ' 00:21:46.896 15:08:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:46.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.896 --rc genhtml_branch_coverage=1 00:21:46.896 --rc genhtml_function_coverage=1 00:21:46.896 --rc genhtml_legend=1 00:21:46.896 --rc geninfo_all_blocks=1 00:21:46.896 --rc geninfo_unexecuted_blocks=1 00:21:46.896 00:21:46.896 ' 00:21:46.896 
15:08:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:46.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.896 --rc genhtml_branch_coverage=1 00:21:46.896 --rc genhtml_function_coverage=1 00:21:46.896 --rc genhtml_legend=1 00:21:46.896 --rc geninfo_all_blocks=1 00:21:46.896 --rc geninfo_unexecuted_blocks=1 00:21:46.896 00:21:46.896 ' 00:21:46.896 15:08:10 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:46.896 15:08:10 -- nvmf/common.sh@7 -- # uname -s 00:21:46.896 15:08:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.896 15:08:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.896 15:08:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.896 15:08:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.896 15:08:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.896 15:08:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.896 15:08:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.896 15:08:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.896 15:08:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.896 15:08:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.896 15:08:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:21:46.896 15:08:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:21:46.896 15:08:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.896 15:08:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.896 15:08:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:46.896 15:08:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:46.896 15:08:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.897 15:08:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.897 15:08:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.897 15:08:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.897 15:08:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.897 15:08:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.897 15:08:10 -- paths/export.sh@5 -- # export PATH 00:21:46.897 15:08:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.897 15:08:10 -- nvmf/common.sh@46 -- # : 0 00:21:46.897 15:08:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:46.897 15:08:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:46.897 15:08:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:46.897 15:08:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.897 15:08:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.897 15:08:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:46.897 15:08:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:46.897 15:08:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:46.897 15:08:10 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:46.897 15:08:10 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:46.897 15:08:10 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:46.897 15:08:10 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:46.897 15:08:10 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:46.897 15:08:10 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:46.897 15:08:10 -- host/discovery.sh@25 -- # nvmftestinit 00:21:46.897 15:08:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:46.897 15:08:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.897 15:08:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:46.897 15:08:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:46.897 15:08:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:46.897 15:08:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.897 15:08:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.897 15:08:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.897 15:08:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:46.897 15:08:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:46.897 15:08:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:46.897 15:08:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:46.897 15:08:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:46.897 15:08:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:46.897 15:08:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.897 15:08:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.897 15:08:10 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:46.897 15:08:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:46.897 15:08:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:46.897 15:08:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:46.897 15:08:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:46.897 15:08:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.897 15:08:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:46.897 15:08:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:46.897 15:08:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:46.897 15:08:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:46.897 15:08:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:47.156 15:08:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:47.156 Cannot find device "nvmf_tgt_br" 00:21:47.156 15:08:10 -- nvmf/common.sh@154 -- # true 00:21:47.156 15:08:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:47.156 Cannot find device "nvmf_tgt_br2" 00:21:47.156 15:08:10 -- nvmf/common.sh@155 -- # true 00:21:47.156 15:08:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:47.156 15:08:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:47.156 Cannot find device "nvmf_tgt_br" 00:21:47.156 15:08:10 -- nvmf/common.sh@157 -- # true 00:21:47.156 15:08:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:47.156 Cannot find device "nvmf_tgt_br2" 00:21:47.156 15:08:10 -- nvmf/common.sh@158 -- # true 00:21:47.156 15:08:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:47.156 15:08:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:47.156 15:08:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:47.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:47.156 15:08:10 -- nvmf/common.sh@161 -- # true 00:21:47.156 15:08:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:47.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:47.156 15:08:10 -- nvmf/common.sh@162 -- # true 00:21:47.156 15:08:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:47.156 15:08:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:47.156 15:08:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:47.156 15:08:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:47.157 15:08:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:47.157 15:08:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:47.157 15:08:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:47.157 15:08:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:47.157 15:08:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:47.157 15:08:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:47.157 15:08:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:47.157 15:08:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:47.157 15:08:10 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:47.157 15:08:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:47.157 15:08:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:47.157 15:08:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:47.157 15:08:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:47.157 15:08:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:47.157 15:08:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:47.157 15:08:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:47.157 15:08:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:47.417 15:08:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:47.417 15:08:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:47.417 15:08:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:47.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:21:47.417 00:21:47.417 --- 10.0.0.2 ping statistics --- 00:21:47.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.417 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:21:47.417 15:08:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:47.417 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:47.417 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:21:47.417 00:21:47.417 --- 10.0.0.3 ping statistics --- 00:21:47.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.417 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:21:47.417 15:08:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:47.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:47.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:21:47.417 00:21:47.417 --- 10.0.0.1 ping statistics --- 00:21:47.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.417 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:21:47.417 15:08:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.417 15:08:10 -- nvmf/common.sh@421 -- # return 0 00:21:47.417 15:08:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:47.417 15:08:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.417 15:08:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:47.417 15:08:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:47.417 15:08:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.417 15:08:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:47.417 15:08:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:47.417 15:08:10 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:47.417 15:08:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:47.417 15:08:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:47.417 15:08:10 -- common/autotest_common.sh@10 -- # set +x 00:21:47.417 15:08:10 -- nvmf/common.sh@469 -- # nvmfpid=96376 00:21:47.417 15:08:10 -- nvmf/common.sh@470 -- # waitforlisten 96376 00:21:47.417 15:08:10 -- common/autotest_common.sh@829 -- # '[' -z 96376 ']' 00:21:47.417 15:08:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.417 15:08:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:47.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.417 15:08:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.417 15:08:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:47.417 15:08:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:47.417 15:08:10 -- common/autotest_common.sh@10 -- # set +x 00:21:47.417 [2024-11-18 15:08:10.849982] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:47.417 [2024-11-18 15:08:10.850069] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.417 [2024-11-18 15:08:10.989453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.676 [2024-11-18 15:08:11.067817] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:47.676 [2024-11-18 15:08:11.067957] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.676 [2024-11-18 15:08:11.067969] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.676 [2024-11-18 15:08:11.067977] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
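For the discovery test the target runs inside the nvmf_tgt_ns_spdk namespace and listens on 10.0.0.2/10.0.0.3, while the host side talks from 10.0.0.1 through the nvmf_br bridge; the pings above verify that path before the target starts. A condensed replay of the veth/bridge topology the transcript just built, using the same ip/iptables commands (flushing of leftover interfaces from a previous run is omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# move the target-side ends into the namespace and assign addresses
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring everything up and bridge the host-side ends together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# admit NVMe/TCP traffic to the initiator interface and allow forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT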
00:21:47.676 [2024-11-18 15:08:11.068006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.243 15:08:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:48.243 15:08:11 -- common/autotest_common.sh@862 -- # return 0 00:21:48.243 15:08:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:48.243 15:08:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:48.243 15:08:11 -- common/autotest_common.sh@10 -- # set +x 00:21:48.243 15:08:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.243 15:08:11 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:48.243 15:08:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.243 15:08:11 -- common/autotest_common.sh@10 -- # set +x 00:21:48.503 [2024-11-18 15:08:11.834927] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.503 15:08:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.503 15:08:11 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:48.503 15:08:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.503 15:08:11 -- common/autotest_common.sh@10 -- # set +x 00:21:48.503 [2024-11-18 15:08:11.847098] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:48.503 15:08:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.503 15:08:11 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:48.503 15:08:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.503 15:08:11 -- common/autotest_common.sh@10 -- # set +x 00:21:48.503 null0 00:21:48.503 15:08:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.503 15:08:11 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:48.503 15:08:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.503 15:08:11 -- common/autotest_common.sh@10 -- # set +x 00:21:48.503 null1 00:21:48.503 15:08:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.503 15:08:11 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:48.503 15:08:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.503 15:08:11 -- common/autotest_common.sh@10 -- # set +x 00:21:48.503 15:08:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.503 15:08:11 -- host/discovery.sh@45 -- # hostpid=96422 00:21:48.503 15:08:11 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:48.503 15:08:11 -- host/discovery.sh@46 -- # waitforlisten 96422 /tmp/host.sock 00:21:48.503 15:08:11 -- common/autotest_common.sh@829 -- # '[' -z 96422 ']' 00:21:48.503 15:08:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:48.503 15:08:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:48.503 15:08:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:48.503 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:48.503 15:08:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:48.503 15:08:11 -- common/autotest_common.sh@10 -- # set +x 00:21:48.503 [2024-11-18 15:08:11.939340] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:48.503 [2024-11-18 15:08:11.939436] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96422 ] 00:21:48.503 [2024-11-18 15:08:12.081626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.762 [2024-11-18 15:08:12.158110] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:48.762 [2024-11-18 15:08:12.158280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.334 15:08:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:49.334 15:08:12 -- common/autotest_common.sh@862 -- # return 0 00:21:49.334 15:08:12 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:49.334 15:08:12 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:49.334 15:08:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.334 15:08:12 -- common/autotest_common.sh@10 -- # set +x 00:21:49.334 15:08:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.334 15:08:12 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:49.334 15:08:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.334 15:08:12 -- common/autotest_common.sh@10 -- # set +x 00:21:49.334 15:08:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.334 15:08:12 -- host/discovery.sh@72 -- # notify_id=0 00:21:49.334 15:08:12 -- host/discovery.sh@78 -- # get_subsystem_names 00:21:49.334 15:08:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:49.334 15:08:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:49.334 15:08:12 -- host/discovery.sh@59 -- # sort 00:21:49.334 15:08:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.334 15:08:12 -- common/autotest_common.sh@10 -- # set +x 00:21:49.334 15:08:12 -- host/discovery.sh@59 -- # xargs 00:21:49.334 15:08:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.595 15:08:12 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:21:49.596 15:08:12 -- host/discovery.sh@79 -- # get_bdev_list 00:21:49.596 15:08:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.596 15:08:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:49.596 15:08:12 -- host/discovery.sh@55 -- # sort 00:21:49.596 15:08:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.596 15:08:12 -- common/autotest_common.sh@10 -- # set +x 00:21:49.596 15:08:12 -- host/discovery.sh@55 -- # xargs 00:21:49.596 15:08:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.596 15:08:13 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:21:49.596 15:08:13 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:49.596 15:08:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.596 15:08:13 -- common/autotest_common.sh@10 -- # set +x 00:21:49.596 15:08:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.596 15:08:13 -- host/discovery.sh@82 -- # get_subsystem_names 00:21:49.596 15:08:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:49.596 15:08:13 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:49.596 15:08:13 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.596 15:08:13 -- host/discovery.sh@59 -- # sort 00:21:49.596 15:08:13 -- common/autotest_common.sh@10 -- # set +x 00:21:49.596 15:08:13 -- host/discovery.sh@59 -- # xargs 00:21:49.596 15:08:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.596 15:08:13 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:21:49.596 15:08:13 -- host/discovery.sh@83 -- # get_bdev_list 00:21:49.596 15:08:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.596 15:08:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.596 15:08:13 -- common/autotest_common.sh@10 -- # set +x 00:21:49.596 15:08:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:49.596 15:08:13 -- host/discovery.sh@55 -- # sort 00:21:49.596 15:08:13 -- host/discovery.sh@55 -- # xargs 00:21:49.596 15:08:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.596 15:08:13 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:49.596 15:08:13 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:49.596 15:08:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.596 15:08:13 -- common/autotest_common.sh@10 -- # set +x 00:21:49.596 15:08:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.596 15:08:13 -- host/discovery.sh@86 -- # get_subsystem_names 00:21:49.596 15:08:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:49.596 15:08:13 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:49.596 15:08:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.596 15:08:13 -- host/discovery.sh@59 -- # sort 00:21:49.596 15:08:13 -- common/autotest_common.sh@10 -- # set +x 00:21:49.596 15:08:13 -- host/discovery.sh@59 -- # xargs 00:21:49.596 15:08:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.855 15:08:13 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:21:49.855 15:08:13 -- host/discovery.sh@87 -- # get_bdev_list 00:21:49.855 15:08:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.855 15:08:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.855 15:08:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:49.855 15:08:13 -- common/autotest_common.sh@10 -- # set +x 00:21:49.855 15:08:13 -- host/discovery.sh@55 -- # sort 00:21:49.855 15:08:13 -- host/discovery.sh@55 -- # xargs 00:21:49.855 15:08:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.855 15:08:13 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:49.855 15:08:13 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:49.855 15:08:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.855 15:08:13 -- common/autotest_common.sh@10 -- # set +x 00:21:49.855 [2024-11-18 15:08:13.235916] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.855 15:08:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.855 15:08:13 -- host/discovery.sh@92 -- # get_subsystem_names 00:21:49.855 15:08:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:49.855 15:08:13 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:49.855 15:08:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.855 15:08:13 -- host/discovery.sh@59 -- # sort 00:21:49.855 15:08:13 -- common/autotest_common.sh@10 -- # set +x 00:21:49.855 15:08:13 -- host/discovery.sh@59 -- # xargs 
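The repeated rpc_cmd | jq | sort | xargs pipelines traced above are discovery.sh's helpers for snapshotting host-side state between steps. Reconstructed from the trace (the helper bodies are assumed to be exactly the traced pipelines):

    get_subsystem_names() {    # names of attached NVMe-oF controllers, space-joined
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {          # names of bdevs exposed by those controllers
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # at this point nothing is attached yet, so both are expected to be empty:
    [[ $(get_subsystem_names) == '' ]]
    [[ $(get_bdev_list) == '' ]]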
00:21:49.855 15:08:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.855 15:08:13 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:49.855 15:08:13 -- host/discovery.sh@93 -- # get_bdev_list 00:21:49.855 15:08:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.855 15:08:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.855 15:08:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:49.855 15:08:13 -- common/autotest_common.sh@10 -- # set +x 00:21:49.855 15:08:13 -- host/discovery.sh@55 -- # xargs 00:21:49.855 15:08:13 -- host/discovery.sh@55 -- # sort 00:21:49.855 15:08:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.855 15:08:13 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:21:49.855 15:08:13 -- host/discovery.sh@94 -- # get_notification_count 00:21:49.855 15:08:13 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:49.855 15:08:13 -- host/discovery.sh@74 -- # jq '. | length' 00:21:49.855 15:08:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.855 15:08:13 -- common/autotest_common.sh@10 -- # set +x 00:21:49.855 15:08:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.855 15:08:13 -- host/discovery.sh@74 -- # notification_count=0 00:21:49.855 15:08:13 -- host/discovery.sh@75 -- # notify_id=0 00:21:49.855 15:08:13 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:21:49.855 15:08:13 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:49.855 15:08:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.855 15:08:13 -- common/autotest_common.sh@10 -- # set +x 00:21:49.855 15:08:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.855 15:08:13 -- host/discovery.sh@100 -- # sleep 1 00:21:50.423 [2024-11-18 15:08:13.899816] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:50.423 [2024-11-18 15:08:13.899844] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:50.423 [2024-11-18 15:08:13.899861] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:50.423 [2024-11-18 15:08:13.986045] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:50.681 [2024-11-18 15:08:14.041307] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:50.681 [2024-11-18 15:08:14.041340] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:50.940 15:08:14 -- host/discovery.sh@101 -- # get_subsystem_names 00:21:50.940 15:08:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:50.940 15:08:14 -- host/discovery.sh@59 -- # sort 00:21:50.940 15:08:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.940 15:08:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:50.940 15:08:14 -- common/autotest_common.sh@10 -- # set +x 00:21:50.940 15:08:14 -- host/discovery.sh@59 -- # xargs 00:21:50.940 15:08:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.940 15:08:14 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.940 15:08:14 -- host/discovery.sh@102 -- # get_bdev_list 00:21:50.940 15:08:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:21:50.940 15:08:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.940 15:08:14 -- common/autotest_common.sh@10 -- # set +x 00:21:50.940 15:08:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:50.940 15:08:14 -- host/discovery.sh@55 -- # sort 00:21:50.940 15:08:14 -- host/discovery.sh@55 -- # xargs 00:21:50.940 15:08:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.940 15:08:14 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:50.940 15:08:14 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:21:50.940 15:08:14 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:50.940 15:08:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.940 15:08:14 -- common/autotest_common.sh@10 -- # set +x 00:21:50.940 15:08:14 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:50.940 15:08:14 -- host/discovery.sh@63 -- # sort -n 00:21:50.940 15:08:14 -- host/discovery.sh@63 -- # xargs 00:21:51.198 15:08:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.199 15:08:14 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:21:51.199 15:08:14 -- host/discovery.sh@104 -- # get_notification_count 00:21:51.199 15:08:14 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:51.199 15:08:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.199 15:08:14 -- host/discovery.sh@74 -- # jq '. | length' 00:21:51.199 15:08:14 -- common/autotest_common.sh@10 -- # set +x 00:21:51.199 15:08:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.199 15:08:14 -- host/discovery.sh@74 -- # notification_count=1 00:21:51.199 15:08:14 -- host/discovery.sh@75 -- # notify_id=1 00:21:51.199 15:08:14 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:21:51.199 15:08:14 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:51.199 15:08:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.199 15:08:14 -- common/autotest_common.sh@10 -- # set +x 00:21:51.199 15:08:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.199 15:08:14 -- host/discovery.sh@109 -- # sleep 1 00:21:52.134 15:08:15 -- host/discovery.sh@110 -- # get_bdev_list 00:21:52.134 15:08:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.134 15:08:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:52.134 15:08:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.134 15:08:15 -- common/autotest_common.sh@10 -- # set +x 00:21:52.134 15:08:15 -- host/discovery.sh@55 -- # sort 00:21:52.134 15:08:15 -- host/discovery.sh@55 -- # xargs 00:21:52.134 15:08:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.134 15:08:15 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:52.134 15:08:15 -- host/discovery.sh@111 -- # get_notification_count 00:21:52.134 15:08:15 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:52.134 15:08:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.134 15:08:15 -- common/autotest_common.sh@10 -- # set +x 00:21:52.134 15:08:15 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:52.134 15:08:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.392 15:08:15 -- host/discovery.sh@74 -- # notification_count=1 00:21:52.392 15:08:15 -- host/discovery.sh@75 -- # notify_id=2 00:21:52.392 15:08:15 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:21:52.392 15:08:15 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:52.392 15:08:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.392 15:08:15 -- common/autotest_common.sh@10 -- # set +x 00:21:52.392 [2024-11-18 15:08:15.752788] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:52.392 [2024-11-18 15:08:15.753858] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:52.392 [2024-11-18 15:08:15.753889] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:52.392 15:08:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.392 15:08:15 -- host/discovery.sh@117 -- # sleep 1 00:21:52.392 [2024-11-18 15:08:15.839909] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:52.392 [2024-11-18 15:08:15.902105] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:52.392 [2024-11-18 15:08:15.902127] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:52.392 [2024-11-18 15:08:15.902133] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:53.328 15:08:16 -- host/discovery.sh@118 -- # get_subsystem_names 00:21:53.328 15:08:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:53.328 15:08:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:53.328 15:08:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.328 15:08:16 -- common/autotest_common.sh@10 -- # set +x 00:21:53.328 15:08:16 -- host/discovery.sh@59 -- # xargs 00:21:53.328 15:08:16 -- host/discovery.sh@59 -- # sort 00:21:53.328 15:08:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.328 15:08:16 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.328 15:08:16 -- host/discovery.sh@119 -- # get_bdev_list 00:21:53.328 15:08:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:53.328 15:08:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.328 15:08:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:53.328 15:08:16 -- host/discovery.sh@55 -- # sort 00:21:53.328 15:08:16 -- common/autotest_common.sh@10 -- # set +x 00:21:53.328 15:08:16 -- host/discovery.sh@55 -- # xargs 00:21:53.328 15:08:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.328 15:08:16 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:53.328 15:08:16 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:21:53.328 15:08:16 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:53.328 15:08:16 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:53.328 15:08:16 -- host/discovery.sh@63 -- # sort -n 00:21:53.328 15:08:16 -- host/discovery.sh@63 -- # xargs 00:21:53.328 15:08:16 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:53.328 15:08:16 -- common/autotest_common.sh@10 -- # set +x 00:21:53.328 15:08:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.587 15:08:16 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:53.587 15:08:16 -- host/discovery.sh@121 -- # get_notification_count 00:21:53.587 15:08:16 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:53.587 15:08:16 -- host/discovery.sh@74 -- # jq '. | length' 00:21:53.587 15:08:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.587 15:08:16 -- common/autotest_common.sh@10 -- # set +x 00:21:53.587 15:08:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.587 15:08:16 -- host/discovery.sh@74 -- # notification_count=0 00:21:53.587 15:08:16 -- host/discovery.sh@75 -- # notify_id=2 00:21:53.587 15:08:16 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:21:53.587 15:08:16 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:53.587 15:08:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.587 15:08:16 -- common/autotest_common.sh@10 -- # set +x 00:21:53.587 [2024-11-18 15:08:16.993457] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:53.587 [2024-11-18 15:08:16.993498] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:53.587 15:08:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.587 15:08:16 -- host/discovery.sh@127 -- # sleep 1 00:21:53.587 [2024-11-18 15:08:16.999513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.587 [2024-11-18 15:08:16.999543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.587 [2024-11-18 15:08:16.999555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.587 [2024-11-18 15:08:16.999564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.587 [2024-11-18 15:08:16.999573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.587 [2024-11-18 15:08:16.999581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.587 [2024-11-18 15:08:16.999590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:53.587 [2024-11-18 15:08:16.999597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:53.588 [2024-11-18 15:08:16.999605] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a570 is same with the state(5) to be set 00:21:53.588 [2024-11-18 15:08:17.009478] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8a570 (9): Bad file descriptor 00:21:53.588 [2024-11-18 15:08:17.019504] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:53.588 [2024-11-18 15:08:17.019582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, 
errno = 111 00:21:53.588 [2024-11-18 15:08:17.019622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:53.588 [2024-11-18 15:08:17.019636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8a570 with addr=10.0.0.2, port=4420 00:21:53.588 [2024-11-18 15:08:17.019644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a570 is same with the state(5) to be set 00:21:53.588 [2024-11-18 15:08:17.019658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8a570 (9): Bad file descriptor 00:21:53.588 [2024-11-18 15:08:17.019671] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:53.588 [2024-11-18 15:08:17.019690] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:53.588 [2024-11-18 15:08:17.019698] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:53.588 [2024-11-18 15:08:17.019711] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:53.588 [2024-11-18 15:08:17.029546] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:53.588 [2024-11-18 15:08:17.029610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:53.588 [2024-11-18 15:08:17.029647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:53.588 [2024-11-18 15:08:17.029660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8a570 with addr=10.0.0.2, port=4420 00:21:53.588 [2024-11-18 15:08:17.029669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a570 is same with the state(5) to be set 00:21:53.588 [2024-11-18 15:08:17.029682] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8a570 (9): Bad file descriptor 00:21:53.588 [2024-11-18 15:08:17.029693] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:53.588 [2024-11-18 15:08:17.029701] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:53.588 [2024-11-18 15:08:17.029708] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:53.588 [2024-11-18 15:08:17.029720] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
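The connect() failures with errno = 111 (ECONNREFUSED) above are expected: discovery.sh has just removed the 4420 listener from cnode0, so the host-side bdev_nvme driver keeps failing to reconnect to that path until the next discovery log page prunes it and only the 4421 path remains (visible in the entries that follow). The target-side removal and the host-side check can be reproduced roughly as:

    # target side: drop the original listener
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

    # host side: wait until only the 4421 path is left on controller nvme0
    # (get_subsystem_paths is the traced helper; the polling loop is illustrative,
    # the test itself just sleeps once and re-checks)
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    while [[ $(get_subsystem_paths nvme0) != "4421" ]]; do sleep 1; done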
00:21:53.588 [2024-11-18 15:08:17.039586] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:53.588 [2024-11-18 15:08:17.039655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:53.588 [2024-11-18 15:08:17.039695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:53.588 [2024-11-18 15:08:17.039708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8a570 with addr=10.0.0.2, port=4420 00:21:53.588 [2024-11-18 15:08:17.039718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a570 is same with the state(5) to be set 00:21:53.588 [2024-11-18 15:08:17.039731] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8a570 (9): Bad file descriptor 00:21:53.588 [2024-11-18 15:08:17.039743] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:53.588 [2024-11-18 15:08:17.039750] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:53.588 [2024-11-18 15:08:17.039757] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:53.588 [2024-11-18 15:08:17.039769] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:53.588 [2024-11-18 15:08:17.049628] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:53.588 [2024-11-18 15:08:17.049690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:53.588 [2024-11-18 15:08:17.049726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:53.588 [2024-11-18 15:08:17.049739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8a570 with addr=10.0.0.2, port=4420 00:21:53.588 [2024-11-18 15:08:17.049748] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a570 is same with the state(5) to be set 00:21:53.588 [2024-11-18 15:08:17.049762] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8a570 (9): Bad file descriptor 00:21:53.588 [2024-11-18 15:08:17.049773] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:53.588 [2024-11-18 15:08:17.049781] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:53.588 [2024-11-18 15:08:17.049788] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:53.588 [2024-11-18 15:08:17.049799] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:53.588 [2024-11-18 15:08:17.059667] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:53.588 [2024-11-18 15:08:17.059728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:53.588 [2024-11-18 15:08:17.059764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:53.588 [2024-11-18 15:08:17.059777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8a570 with addr=10.0.0.2, port=4420 00:21:53.588 [2024-11-18 15:08:17.059786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a570 is same with the state(5) to be set 00:21:53.588 [2024-11-18 15:08:17.059799] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8a570 (9): Bad file descriptor 00:21:53.588 [2024-11-18 15:08:17.059811] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:53.588 [2024-11-18 15:08:17.059818] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:53.588 [2024-11-18 15:08:17.059825] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:53.588 [2024-11-18 15:08:17.059836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:53.588 [2024-11-18 15:08:17.069705] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:53.588 [2024-11-18 15:08:17.069766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:53.588 [2024-11-18 15:08:17.069801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:53.588 [2024-11-18 15:08:17.069814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8a570 with addr=10.0.0.2, port=4420 00:21:53.588 [2024-11-18 15:08:17.069823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a570 is same with the state(5) to be set 00:21:53.588 [2024-11-18 15:08:17.069836] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8a570 (9): Bad file descriptor 00:21:53.588 [2024-11-18 15:08:17.069847] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:53.588 [2024-11-18 15:08:17.069854] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:53.588 [2024-11-18 15:08:17.069862] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:53.588 [2024-11-18 15:08:17.069873] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
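Between these reconnect attempts the test keeps polling host state; one of the checks that follows is the notification count, which asks the host app how many bdev events arrived since the last seen id. A sketch reconstructed from the traced values (the exact helper body is an assumption; the trace only shows the RPC, the jq filter, and the resulting notification_count/notify_id pairs):

    get_notification_count() {
        # fetch everything newer than the last id we accounted for
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }
    # e.g. attaching null0 produced one notification (count 1, notify_id 0 -> 1),
    # and attaching null1 produced another (count 1, notify_id 1 -> 2).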
00:21:53.588 [2024-11-18 15:08:17.079520] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:53.588 [2024-11-18 15:08:17.079544] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:54.525 15:08:18 -- host/discovery.sh@128 -- # get_subsystem_names 00:21:54.525 15:08:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:54.525 15:08:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:54.525 15:08:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.525 15:08:18 -- common/autotest_common.sh@10 -- # set +x 00:21:54.525 15:08:18 -- host/discovery.sh@59 -- # sort 00:21:54.525 15:08:18 -- host/discovery.sh@59 -- # xargs 00:21:54.525 15:08:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.525 15:08:18 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.525 15:08:18 -- host/discovery.sh@129 -- # get_bdev_list 00:21:54.525 15:08:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:54.525 15:08:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:54.525 15:08:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.525 15:08:18 -- host/discovery.sh@55 -- # sort 00:21:54.525 15:08:18 -- common/autotest_common.sh@10 -- # set +x 00:21:54.525 15:08:18 -- host/discovery.sh@55 -- # xargs 00:21:54.525 15:08:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.784 15:08:18 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:54.784 15:08:18 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:21:54.784 15:08:18 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:54.784 15:08:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.784 15:08:18 -- common/autotest_common.sh@10 -- # set +x 00:21:54.784 15:08:18 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:54.784 15:08:18 -- host/discovery.sh@63 -- # xargs 00:21:54.784 15:08:18 -- host/discovery.sh@63 -- # sort -n 00:21:54.784 15:08:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.784 15:08:18 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:21:54.784 15:08:18 -- host/discovery.sh@131 -- # get_notification_count 00:21:54.784 15:08:18 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:54.784 15:08:18 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:54.784 15:08:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.784 15:08:18 -- common/autotest_common.sh@10 -- # set +x 00:21:54.784 15:08:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.784 15:08:18 -- host/discovery.sh@74 -- # notification_count=0 00:21:54.784 15:08:18 -- host/discovery.sh@75 -- # notify_id=2 00:21:54.784 15:08:18 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:21:54.784 15:08:18 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:54.784 15:08:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.784 15:08:18 -- common/autotest_common.sh@10 -- # set +x 00:21:54.784 15:08:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.784 15:08:18 -- host/discovery.sh@135 -- # sleep 1 00:21:55.731 15:08:19 -- host/discovery.sh@136 -- # get_subsystem_names 00:21:55.731 15:08:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:55.731 15:08:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:55.731 15:08:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.731 15:08:19 -- host/discovery.sh@59 -- # sort 00:21:55.732 15:08:19 -- common/autotest_common.sh@10 -- # set +x 00:21:55.732 15:08:19 -- host/discovery.sh@59 -- # xargs 00:21:55.732 15:08:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.732 15:08:19 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:21:55.732 15:08:19 -- host/discovery.sh@137 -- # get_bdev_list 00:21:55.732 15:08:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:55.732 15:08:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.732 15:08:19 -- common/autotest_common.sh@10 -- # set +x 00:21:55.732 15:08:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:55.732 15:08:19 -- host/discovery.sh@55 -- # sort 00:21:55.732 15:08:19 -- host/discovery.sh@55 -- # xargs 00:21:55.732 15:08:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.994 15:08:19 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:21:55.994 15:08:19 -- host/discovery.sh@138 -- # get_notification_count 00:21:55.994 15:08:19 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:55.994 15:08:19 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:55.994 15:08:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.994 15:08:19 -- common/autotest_common.sh@10 -- # set +x 00:21:55.994 15:08:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.994 15:08:19 -- host/discovery.sh@74 -- # notification_count=2 00:21:55.994 15:08:19 -- host/discovery.sh@75 -- # notify_id=4 00:21:55.994 15:08:19 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:21:55.994 15:08:19 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:55.994 15:08:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.994 15:08:19 -- common/autotest_common.sh@10 -- # set +x 00:21:56.929 [2024-11-18 15:08:20.420598] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:56.929 [2024-11-18 15:08:20.420623] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:56.929 [2024-11-18 15:08:20.420639] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:56.929 [2024-11-18 15:08:20.506678] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:57.188 [2024-11-18 15:08:20.565586] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:57.188 [2024-11-18 15:08:20.565624] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:57.188 15:08:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.188 15:08:20 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:57.188 15:08:20 -- common/autotest_common.sh@650 -- # local es=0 00:21:57.188 15:08:20 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:57.188 15:08:20 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:57.188 15:08:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:57.188 15:08:20 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:57.188 15:08:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:57.188 15:08:20 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:57.188 15:08:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.188 15:08:20 -- common/autotest_common.sh@10 -- # set +x 00:21:57.188 2024/11/18 15:08:20 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:57.188 request: 00:21:57.188 { 00:21:57.188 "method": "bdev_nvme_start_discovery", 00:21:57.188 "params": { 00:21:57.188 "name": "nvme", 00:21:57.188 "trtype": "tcp", 00:21:57.188 "traddr": "10.0.0.2", 00:21:57.188 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:57.188 "adrfam": "ipv4", 00:21:57.188 "trsvcid": "8009", 00:21:57.188 "wait_for_attach": true 00:21:57.188 } 
00:21:57.188 } 00:21:57.188 Got JSON-RPC error response 00:21:57.188 GoRPCClient: error on JSON-RPC call 00:21:57.188 15:08:20 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:57.188 15:08:20 -- common/autotest_common.sh@653 -- # es=1 00:21:57.188 15:08:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:57.188 15:08:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:57.188 15:08:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:57.188 15:08:20 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:21:57.188 15:08:20 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:57.188 15:08:20 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:57.188 15:08:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.188 15:08:20 -- common/autotest_common.sh@10 -- # set +x 00:21:57.188 15:08:20 -- host/discovery.sh@67 -- # sort 00:21:57.188 15:08:20 -- host/discovery.sh@67 -- # xargs 00:21:57.188 15:08:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.188 15:08:20 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:21:57.188 15:08:20 -- host/discovery.sh@147 -- # get_bdev_list 00:21:57.188 15:08:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:57.188 15:08:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.188 15:08:20 -- common/autotest_common.sh@10 -- # set +x 00:21:57.188 15:08:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:57.188 15:08:20 -- host/discovery.sh@55 -- # sort 00:21:57.188 15:08:20 -- host/discovery.sh@55 -- # xargs 00:21:57.188 15:08:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.188 15:08:20 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:57.188 15:08:20 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:57.188 15:08:20 -- common/autotest_common.sh@650 -- # local es=0 00:21:57.188 15:08:20 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:57.188 15:08:20 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:57.188 15:08:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:57.188 15:08:20 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:57.188 15:08:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:57.188 15:08:20 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:57.188 15:08:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.188 15:08:20 -- common/autotest_common.sh@10 -- # set +x 00:21:57.188 2024/11/18 15:08:20 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:57.188 request: 00:21:57.188 { 00:21:57.188 "method": "bdev_nvme_start_discovery", 00:21:57.188 "params": { 00:21:57.188 "name": "nvme_second", 00:21:57.188 "trtype": "tcp", 00:21:57.188 "traddr": "10.0.0.2", 00:21:57.188 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:57.188 "adrfam": "ipv4", 00:21:57.188 
"trsvcid": "8009", 00:21:57.188 "wait_for_attach": true 00:21:57.188 } 00:21:57.188 } 00:21:57.188 Got JSON-RPC error response 00:21:57.188 GoRPCClient: error on JSON-RPC call 00:21:57.188 15:08:20 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:57.188 15:08:20 -- common/autotest_common.sh@653 -- # es=1 00:21:57.188 15:08:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:57.188 15:08:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:57.188 15:08:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:57.188 15:08:20 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:21:57.188 15:08:20 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:57.188 15:08:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.188 15:08:20 -- common/autotest_common.sh@10 -- # set +x 00:21:57.188 15:08:20 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:57.188 15:08:20 -- host/discovery.sh@67 -- # sort 00:21:57.188 15:08:20 -- host/discovery.sh@67 -- # xargs 00:21:57.188 15:08:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.188 15:08:20 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:21:57.188 15:08:20 -- host/discovery.sh@153 -- # get_bdev_list 00:21:57.188 15:08:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:57.188 15:08:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.188 15:08:20 -- common/autotest_common.sh@10 -- # set +x 00:21:57.188 15:08:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:57.188 15:08:20 -- host/discovery.sh@55 -- # sort 00:21:57.188 15:08:20 -- host/discovery.sh@55 -- # xargs 00:21:57.447 15:08:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.447 15:08:20 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:57.447 15:08:20 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:57.447 15:08:20 -- common/autotest_common.sh@650 -- # local es=0 00:21:57.447 15:08:20 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:57.447 15:08:20 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:57.447 15:08:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:57.447 15:08:20 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:57.447 15:08:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:57.447 15:08:20 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:57.447 15:08:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.447 15:08:20 -- common/autotest_common.sh@10 -- # set +x 00:21:58.382 [2024-11-18 15:08:21.828036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:58.382 [2024-11-18 15:08:21.828096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:58.382 [2024-11-18 15:08:21.828113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf25f80 with addr=10.0.0.2, port=8010 00:21:58.382 [2024-11-18 15:08:21.828126] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:58.382 [2024-11-18 15:08:21.828134] 
nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:58.382 [2024-11-18 15:08:21.828141] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:59.318 [2024-11-18 15:08:22.828016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:59.318 [2024-11-18 15:08:22.828069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:59.318 [2024-11-18 15:08:22.828085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefeca0 with addr=10.0.0.2, port=8010 00:21:59.318 [2024-11-18 15:08:22.828096] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:59.318 [2024-11-18 15:08:22.828104] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:59.318 [2024-11-18 15:08:22.828111] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:00.254 [2024-11-18 15:08:23.827957] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:00.254 2024/11/18 15:08:23 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:22:00.254 request: 00:22:00.254 { 00:22:00.254 "method": "bdev_nvme_start_discovery", 00:22:00.254 "params": { 00:22:00.254 "name": "nvme_second", 00:22:00.254 "trtype": "tcp", 00:22:00.254 "traddr": "10.0.0.2", 00:22:00.254 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:00.254 "adrfam": "ipv4", 00:22:00.254 "trsvcid": "8010", 00:22:00.254 "attach_timeout_ms": 3000 00:22:00.254 } 00:22:00.254 } 00:22:00.254 Got JSON-RPC error response 00:22:00.254 GoRPCClient: error on JSON-RPC call 00:22:00.254 15:08:23 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:00.254 15:08:23 -- common/autotest_common.sh@653 -- # es=1 00:22:00.254 15:08:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:00.254 15:08:23 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:00.254 15:08:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:00.254 15:08:23 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:22:00.254 15:08:23 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:00.254 15:08:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.254 15:08:23 -- common/autotest_common.sh@10 -- # set +x 00:22:00.254 15:08:23 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:00.254 15:08:23 -- host/discovery.sh@67 -- # sort 00:22:00.254 15:08:23 -- host/discovery.sh@67 -- # xargs 00:22:00.513 15:08:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.513 15:08:23 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:22:00.513 15:08:23 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:22:00.513 15:08:23 -- host/discovery.sh@162 -- # kill 96422 00:22:00.513 15:08:23 -- host/discovery.sh@163 -- # nvmftestfini 00:22:00.513 15:08:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:00.513 15:08:23 -- nvmf/common.sh@116 -- # sync 00:22:00.513 15:08:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:00.513 15:08:23 -- nvmf/common.sh@119 -- # set +e 00:22:00.513 15:08:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:00.513 15:08:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:00.513 
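The modprobe -v -r nvme-tcp above (with its rmmod output just below) is part of nvmftestfini: the host app has already been killed, the kernel initiator modules are unloaded, the target process is stopped, and the namespace plus the initiator-side address are torn down. Condensed from the trace, with the namespace deletion assumed (the trace only shows the _remove_spdk_ns wrapper):

    kill $hostpid                        # host-side nvmf_tgt listening on /tmp/host.sock
    modprobe -v -r nvme-tcp              # unloads nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    killprocess $nvmfpid                 # target inside nvmf_tgt_ns_spdk (pid 96376 here)
    ip netns delete nvmf_tgt_ns_spdk     # assumed: what _remove_spdk_ns amounts to
    ip -4 addr flush nvmf_init_if        # drop the initiator-side test address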
rmmod nvme_tcp 00:22:00.513 rmmod nvme_fabrics 00:22:00.513 rmmod nvme_keyring 00:22:00.513 15:08:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:00.513 15:08:23 -- nvmf/common.sh@123 -- # set -e 00:22:00.513 15:08:23 -- nvmf/common.sh@124 -- # return 0 00:22:00.513 15:08:23 -- nvmf/common.sh@477 -- # '[' -n 96376 ']' 00:22:00.513 15:08:23 -- nvmf/common.sh@478 -- # killprocess 96376 00:22:00.513 15:08:23 -- common/autotest_common.sh@936 -- # '[' -z 96376 ']' 00:22:00.513 15:08:23 -- common/autotest_common.sh@940 -- # kill -0 96376 00:22:00.513 15:08:23 -- common/autotest_common.sh@941 -- # uname 00:22:00.513 15:08:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:00.513 15:08:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96376 00:22:00.513 killing process with pid 96376 00:22:00.513 15:08:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:00.513 15:08:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:00.513 15:08:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96376' 00:22:00.513 15:08:24 -- common/autotest_common.sh@955 -- # kill 96376 00:22:00.513 15:08:24 -- common/autotest_common.sh@960 -- # wait 96376 00:22:00.772 15:08:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:00.772 15:08:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:00.772 15:08:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:00.772 15:08:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:00.772 15:08:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:00.772 15:08:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.772 15:08:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:00.772 15:08:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.772 15:08:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:00.772 00:22:00.772 real 0m14.040s 00:22:00.772 user 0m27.431s 00:22:00.772 sys 0m1.643s 00:22:00.772 15:08:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:00.772 15:08:24 -- common/autotest_common.sh@10 -- # set +x 00:22:00.772 ************************************ 00:22:00.772 END TEST nvmf_discovery 00:22:00.772 ************************************ 00:22:01.032 15:08:24 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:01.032 15:08:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:01.032 15:08:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:01.032 15:08:24 -- common/autotest_common.sh@10 -- # set +x 00:22:01.032 ************************************ 00:22:01.032 START TEST nvmf_discovery_remove_ifc 00:22:01.032 ************************************ 00:22:01.032 15:08:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:01.032 * Looking for test storage... 
00:22:01.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:01.032 15:08:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:01.032 15:08:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:01.032 15:08:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:01.032 15:08:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:01.032 15:08:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:01.032 15:08:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:01.032 15:08:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:01.032 15:08:24 -- scripts/common.sh@335 -- # IFS=.-: 00:22:01.032 15:08:24 -- scripts/common.sh@335 -- # read -ra ver1 00:22:01.032 15:08:24 -- scripts/common.sh@336 -- # IFS=.-: 00:22:01.032 15:08:24 -- scripts/common.sh@336 -- # read -ra ver2 00:22:01.032 15:08:24 -- scripts/common.sh@337 -- # local 'op=<' 00:22:01.032 15:08:24 -- scripts/common.sh@339 -- # ver1_l=2 00:22:01.032 15:08:24 -- scripts/common.sh@340 -- # ver2_l=1 00:22:01.032 15:08:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:01.032 15:08:24 -- scripts/common.sh@343 -- # case "$op" in 00:22:01.032 15:08:24 -- scripts/common.sh@344 -- # : 1 00:22:01.032 15:08:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:01.032 15:08:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:01.032 15:08:24 -- scripts/common.sh@364 -- # decimal 1 00:22:01.032 15:08:24 -- scripts/common.sh@352 -- # local d=1 00:22:01.032 15:08:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:01.032 15:08:24 -- scripts/common.sh@354 -- # echo 1 00:22:01.032 15:08:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:01.032 15:08:24 -- scripts/common.sh@365 -- # decimal 2 00:22:01.032 15:08:24 -- scripts/common.sh@352 -- # local d=2 00:22:01.032 15:08:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:01.032 15:08:24 -- scripts/common.sh@354 -- # echo 2 00:22:01.032 15:08:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:01.032 15:08:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:01.032 15:08:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:01.032 15:08:24 -- scripts/common.sh@367 -- # return 0 00:22:01.032 15:08:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:01.032 15:08:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:01.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.032 --rc genhtml_branch_coverage=1 00:22:01.032 --rc genhtml_function_coverage=1 00:22:01.032 --rc genhtml_legend=1 00:22:01.032 --rc geninfo_all_blocks=1 00:22:01.032 --rc geninfo_unexecuted_blocks=1 00:22:01.032 00:22:01.032 ' 00:22:01.032 15:08:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:01.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.032 --rc genhtml_branch_coverage=1 00:22:01.032 --rc genhtml_function_coverage=1 00:22:01.032 --rc genhtml_legend=1 00:22:01.033 --rc geninfo_all_blocks=1 00:22:01.033 --rc geninfo_unexecuted_blocks=1 00:22:01.033 00:22:01.033 ' 00:22:01.033 15:08:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:01.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.033 --rc genhtml_branch_coverage=1 00:22:01.033 --rc genhtml_function_coverage=1 00:22:01.033 --rc genhtml_legend=1 00:22:01.033 --rc geninfo_all_blocks=1 00:22:01.033 --rc geninfo_unexecuted_blocks=1 00:22:01.033 00:22:01.033 ' 00:22:01.033 
15:08:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:01.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.033 --rc genhtml_branch_coverage=1 00:22:01.033 --rc genhtml_function_coverage=1 00:22:01.033 --rc genhtml_legend=1 00:22:01.033 --rc geninfo_all_blocks=1 00:22:01.033 --rc geninfo_unexecuted_blocks=1 00:22:01.033 00:22:01.033 ' 00:22:01.033 15:08:24 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:01.033 15:08:24 -- nvmf/common.sh@7 -- # uname -s 00:22:01.033 15:08:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.033 15:08:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.033 15:08:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.033 15:08:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.033 15:08:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.033 15:08:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.033 15:08:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.033 15:08:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.033 15:08:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.033 15:08:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.033 15:08:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:22:01.033 15:08:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:22:01.033 15:08:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.033 15:08:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.033 15:08:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:01.033 15:08:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:01.033 15:08:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.033 15:08:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.033 15:08:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.033 15:08:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.033 15:08:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.033 15:08:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.033 15:08:24 -- paths/export.sh@5 -- # export PATH 00:22:01.033 15:08:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.033 15:08:24 -- nvmf/common.sh@46 -- # : 0 00:22:01.033 15:08:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:01.033 15:08:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:01.033 15:08:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:01.033 15:08:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.033 15:08:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.033 15:08:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:01.033 15:08:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:01.033 15:08:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:01.033 15:08:24 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:01.033 15:08:24 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:01.033 15:08:24 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:01.033 15:08:24 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:01.033 15:08:24 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:01.033 15:08:24 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:01.033 15:08:24 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:01.033 15:08:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:01.033 15:08:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.033 15:08:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:01.033 15:08:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:01.033 15:08:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:01.033 15:08:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.033 15:08:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:01.033 15:08:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.033 15:08:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:01.033 15:08:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:01.033 15:08:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:01.033 15:08:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:01.033 15:08:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:01.033 15:08:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:01.033 15:08:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.033 15:08:24 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.033 15:08:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:01.033 15:08:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:01.033 15:08:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:01.033 15:08:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:01.033 15:08:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:01.033 15:08:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:01.033 15:08:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:01.033 15:08:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:01.033 15:08:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:01.033 15:08:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:01.033 15:08:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:01.033 15:08:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:01.033 Cannot find device "nvmf_tgt_br" 00:22:01.033 15:08:24 -- nvmf/common.sh@154 -- # true 00:22:01.033 15:08:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:01.033 Cannot find device "nvmf_tgt_br2" 00:22:01.033 15:08:24 -- nvmf/common.sh@155 -- # true 00:22:01.033 15:08:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:01.033 15:08:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:01.292 Cannot find device "nvmf_tgt_br" 00:22:01.292 15:08:24 -- nvmf/common.sh@157 -- # true 00:22:01.292 15:08:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:01.292 Cannot find device "nvmf_tgt_br2" 00:22:01.292 15:08:24 -- nvmf/common.sh@158 -- # true 00:22:01.292 15:08:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:01.292 15:08:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:01.292 15:08:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:01.292 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:01.292 15:08:24 -- nvmf/common.sh@161 -- # true 00:22:01.292 15:08:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:01.292 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:01.292 15:08:24 -- nvmf/common.sh@162 -- # true 00:22:01.292 15:08:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:01.292 15:08:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:01.292 15:08:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:01.292 15:08:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:01.292 15:08:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:01.292 15:08:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:01.292 15:08:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:01.292 15:08:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:01.292 15:08:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:01.292 15:08:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:01.292 15:08:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:01.292 15:08:24 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:01.292 15:08:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:01.292 15:08:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:01.292 15:08:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:01.292 15:08:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:01.292 15:08:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:01.292 15:08:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:01.292 15:08:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:01.292 15:08:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:01.292 15:08:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:01.292 15:08:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:01.292 15:08:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:01.292 15:08:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:01.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:01.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:22:01.292 00:22:01.292 --- 10.0.0.2 ping statistics --- 00:22:01.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.292 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:22:01.292 15:08:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:01.292 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:01.292 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:22:01.292 00:22:01.292 --- 10.0.0.3 ping statistics --- 00:22:01.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.292 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:22:01.292 15:08:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:01.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:01.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:22:01.552 00:22:01.552 --- 10.0.0.1 ping statistics --- 00:22:01.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.552 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:22:01.552 15:08:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:01.552 15:08:24 -- nvmf/common.sh@421 -- # return 0 00:22:01.552 15:08:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:01.552 15:08:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:01.552 15:08:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:01.552 15:08:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:01.552 15:08:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:01.552 15:08:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:01.552 15:08:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:01.552 15:08:24 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:01.552 15:08:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:01.552 15:08:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:01.552 15:08:24 -- common/autotest_common.sh@10 -- # set +x 00:22:01.552 15:08:24 -- nvmf/common.sh@469 -- # nvmfpid=96931 00:22:01.552 15:08:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:01.552 15:08:24 -- nvmf/common.sh@470 -- # waitforlisten 96931 00:22:01.552 15:08:24 -- common/autotest_common.sh@829 -- # '[' -z 96931 ']' 00:22:01.552 15:08:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.552 15:08:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:01.552 15:08:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.552 15:08:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:01.552 15:08:24 -- common/autotest_common.sh@10 -- # set +x 00:22:01.552 [2024-11-18 15:08:24.964871] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:01.552 [2024-11-18 15:08:24.964957] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.552 [2024-11-18 15:08:25.100273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.811 [2024-11-18 15:08:25.191286] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:01.811 [2024-11-18 15:08:25.191634] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.811 [2024-11-18 15:08:25.191664] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.811 [2024-11-18 15:08:25.191675] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
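The pings that succeed above run over a veth topology assembled a few lines earlier (nvmf/common.sh@165-206): a target namespace nvmf_tgt_ns_spdk holding two veth endpoints (10.0.0.2 and 10.0.0.3), an initiator-side veth with 10.0.0.1, and a bridge nvmf_br joining the peer ends, plus an iptables rule admitting TCP port 4420. A condensed sketch using the interface names from the trace (the second target interface, nvmf_tgt_if2 with 10.0.0.3, is created the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target namespace, as verified above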
00:22:01.811 [2024-11-18 15:08:25.191709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.747 15:08:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.747 15:08:25 -- common/autotest_common.sh@862 -- # return 0 00:22:02.747 15:08:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:02.747 15:08:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:02.747 15:08:25 -- common/autotest_common.sh@10 -- # set +x 00:22:02.747 15:08:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.747 15:08:26 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:02.747 15:08:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.747 15:08:26 -- common/autotest_common.sh@10 -- # set +x 00:22:02.747 [2024-11-18 15:08:26.038072] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.747 [2024-11-18 15:08:26.046218] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:02.747 null0 00:22:02.748 [2024-11-18 15:08:26.078106] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.748 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:02.748 15:08:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.748 15:08:26 -- host/discovery_remove_ifc.sh@59 -- # hostpid=96987 00:22:02.748 15:08:26 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:02.748 15:08:26 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 96987 /tmp/host.sock 00:22:02.748 15:08:26 -- common/autotest_common.sh@829 -- # '[' -z 96987 ']' 00:22:02.748 15:08:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:02.748 15:08:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.748 15:08:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:02.748 15:08:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.748 15:08:26 -- common/autotest_common.sh@10 -- # set +x 00:22:02.748 [2024-11-18 15:08:26.144343] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
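Two SPDK processes are now involved: the nvmf_tgt started inside nvmf_tgt_ns_spdk earlier (pid 96931, RPC on the default /var/tmp/spdk.sock) acts as the target, while a second nvmf_tgt instance launched at discovery_remove_ifc.sh@58 plays the host/initiator role on its own RPC socket. A sketch of that host-side launch, paraphrased from the trace (waitforlisten simply polls until the UNIX socket accepts RPC connections):

  # host-side SPDK app: single core, private RPC socket, bdev_nvme debug logging
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  hostpid=$!
  # --wait-for-rpc keeps the app paused until the test issues RPCs over /tmp/host.sock,
  # which is what discovery_remove_ifc.sh@65-66 do next: bdev_nvme_set_options -e 1, then framework_start_init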
00:22:02.748 [2024-11-18 15:08:26.144579] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96987 ] 00:22:02.748 [2024-11-18 15:08:26.276438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.007 [2024-11-18 15:08:26.340848] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:03.007 [2024-11-18 15:08:26.341291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.575 15:08:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.575 15:08:27 -- common/autotest_common.sh@862 -- # return 0 00:22:03.575 15:08:27 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:03.575 15:08:27 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:03.575 15:08:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.575 15:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:03.575 15:08:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.575 15:08:27 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:03.575 15:08:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.575 15:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:03.575 15:08:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.575 15:08:27 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:03.575 15:08:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.575 15:08:27 -- common/autotest_common.sh@10 -- # set +x 00:22:04.954 [2024-11-18 15:08:28.140976] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:04.954 [2024-11-18 15:08:28.141149] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:04.954 [2024-11-18 15:08:28.141179] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:04.954 [2024-11-18 15:08:28.227069] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:04.954 [2024-11-18 15:08:28.282659] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:04.954 [2024-11-18 15:08:28.282828] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:04.954 [2024-11-18 15:08:28.282892] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:04.954 [2024-11-18 15:08:28.282999] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:04.954 [2024-11-18 15:08:28.283068] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:04.954 15:08:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.954 15:08:28 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:04.954 15:08:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:04.954 [2024-11-18 
15:08:28.289641] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x229bda0 was disconnected and freed. delete nvme_qpair. 00:22:04.954 15:08:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.954 15:08:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:04.954 15:08:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:04.954 15:08:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.954 15:08:28 -- common/autotest_common.sh@10 -- # set +x 00:22:04.954 15:08:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:04.954 15:08:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.954 15:08:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:04.954 15:08:28 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:04.954 15:08:28 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:04.954 15:08:28 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:04.954 15:08:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:04.954 15:08:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.954 15:08:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:04.954 15:08:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.954 15:08:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:04.954 15:08:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:04.954 15:08:28 -- common/autotest_common.sh@10 -- # set +x 00:22:04.954 15:08:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.954 15:08:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:04.954 15:08:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:05.891 15:08:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:05.891 15:08:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.891 15:08:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.891 15:08:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:05.891 15:08:29 -- common/autotest_common.sh@10 -- # set +x 00:22:05.891 15:08:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:05.891 15:08:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:05.891 15:08:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.891 15:08:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:05.891 15:08:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:07.269 15:08:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:07.269 15:08:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:07.269 15:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.269 15:08:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:07.269 15:08:30 -- common/autotest_common.sh@10 -- # set +x 00:22:07.269 15:08:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:07.269 15:08:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:07.269 15:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.269 15:08:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:07.269 15:08:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:08.206 15:08:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:08.206 15:08:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:22:08.206 15:08:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:08.206 15:08:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:08.206 15:08:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.206 15:08:31 -- common/autotest_common.sh@10 -- # set +x 00:22:08.206 15:08:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:08.206 15:08:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.206 15:08:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:08.206 15:08:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:09.142 15:08:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:09.142 15:08:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.142 15:08:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:09.142 15:08:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:09.142 15:08:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:09.142 15:08:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.142 15:08:32 -- common/autotest_common.sh@10 -- # set +x 00:22:09.142 15:08:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.142 15:08:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:09.142 15:08:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:10.079 15:08:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:10.079 15:08:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:10.079 15:08:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.079 15:08:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:10.338 15:08:33 -- common/autotest_common.sh@10 -- # set +x 00:22:10.338 15:08:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:10.338 15:08:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:10.338 15:08:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.338 [2024-11-18 15:08:33.710900] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:10.338 [2024-11-18 15:08:33.711097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.338 [2024-11-18 15:08:33.711278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.338 [2024-11-18 15:08:33.711425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.338 [2024-11-18 15:08:33.711444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.338 [2024-11-18 15:08:33.711453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.338 [2024-11-18 15:08:33.711462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.338 [2024-11-18 15:08:33.711471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.338 [2024-11-18 15:08:33.711479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.338 [2024-11-18 
15:08:33.711487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.338 [2024-11-18 15:08:33.711495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.338 [2024-11-18 15:08:33.711503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2205690 is same with the state(5) to be set 00:22:10.338 [2024-11-18 15:08:33.720896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2205690 (9): Bad file descriptor 00:22:10.338 15:08:33 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:10.338 15:08:33 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:10.338 [2024-11-18 15:08:33.730913] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:11.273 15:08:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:11.273 15:08:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:11.273 15:08:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.273 15:08:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.273 15:08:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:11.273 15:08:34 -- common/autotest_common.sh@10 -- # set +x 00:22:11.273 15:08:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:11.273 [2024-11-18 15:08:34.762377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:12.210 [2024-11-18 15:08:35.787441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:12.210 [2024-11-18 15:08:35.787782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2205690 with addr=10.0.0.2, port=4420 00:22:12.210 [2024-11-18 15:08:35.788124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2205690 is same with the state(5) to be set 00:22:12.210 [2024-11-18 15:08:35.788478] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:12.210 [2024-11-18 15:08:35.788508] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:12.210 [2024-11-18 15:08:35.788527] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:12.210 [2024-11-18 15:08:35.788546] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:12.210 [2024-11-18 15:08:35.789268] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2205690 (9): Bad file descriptor 00:22:12.210 [2024-11-18 15:08:35.789377] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
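The errno 110 (Connection timed out) failures and the aborted reset above are the intended effect of pulling nvmf_tgt_if while nvme0 is attached: the host retries according to the reconnect parameters passed when discovery was started at discovery_remove_ifc.sh@69, then declares the controller failed, which is what lets nvme0n1 drop out of the bdev list. For reference, that discovery attach with rpc.py standing in for the suite's rpc_cmd wrapper (all values copied from the trace):

  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
      --wait-for-attach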
00:22:12.210 [2024-11-18 15:08:35.789438] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:12.210 [2024-11-18 15:08:35.789503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.210 [2024-11-18 15:08:35.789531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.210 [2024-11-18 15:08:35.789556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.210 [2024-11-18 15:08:35.789575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.210 [2024-11-18 15:08:35.789596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.210 [2024-11-18 15:08:35.789615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.210 [2024-11-18 15:08:35.789635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.210 [2024-11-18 15:08:35.789654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.210 [2024-11-18 15:08:35.789674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.210 [2024-11-18 15:08:35.789693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.210 [2024-11-18 15:08:35.789711] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:22:12.210 [2024-11-18 15:08:35.789768] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2263410 (9): Bad file descriptor 00:22:12.210 [2024-11-18 15:08:35.790768] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:12.210 [2024-11-18 15:08:35.790811] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:12.469 15:08:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.469 15:08:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:12.469 15:08:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:13.406 15:08:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:13.406 15:08:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:13.406 15:08:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:13.406 15:08:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.406 15:08:36 -- common/autotest_common.sh@10 -- # set +x 00:22:13.406 15:08:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:13.406 15:08:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:13.406 15:08:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.406 15:08:36 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:13.406 15:08:36 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:13.406 15:08:36 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:13.406 15:08:36 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:13.406 15:08:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:13.406 15:08:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:13.406 15:08:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:13.406 15:08:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:13.406 15:08:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:13.406 15:08:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.406 15:08:36 -- common/autotest_common.sh@10 -- # set +x 00:22:13.406 15:08:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.406 15:08:36 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:13.406 15:08:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:14.342 [2024-11-18 15:08:37.802374] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:14.342 [2024-11-18 15:08:37.802394] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:14.342 [2024-11-18 15:08:37.802409] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:14.342 [2024-11-18 15:08:37.888475] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:14.600 15:08:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:14.600 15:08:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:14.600 15:08:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:14.600 15:08:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.600 15:08:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:14.600 15:08:37 -- common/autotest_common.sh@10 -- # set +x 
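With the old controller gone (get_bdev_list now returns ''), the test restores the interface at discovery_remove_ifc.sh@82-83 and waits for the still-running discovery service to re-attach and surface a new namespace bdev, nvme1n1. A plausible reconstruction of the polling helper and the restore step (rpc.py again stands in for rpc_cmd; the real wait_for_bdev compares the whole list against the expected name, as the [[ ... != \n\v\m\e\1\n\1 ]] checks show):

  get_bdev_list() {
      # names of all bdevs currently known to the host app, sorted onto one line
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  until [[ "$(get_bdev_list)" == nvme1n1 ]]; do sleep 1; done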
00:22:14.600 15:08:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:14.600 15:08:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.600 [2024-11-18 15:08:37.943605] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:14.600 [2024-11-18 15:08:37.943662] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:14.600 [2024-11-18 15:08:37.943699] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:14.600 [2024-11-18 15:08:37.943729] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:14.600 [2024-11-18 15:08:37.943736] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:14.600 [2024-11-18 15:08:37.950962] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x22690c0 was disconnected and freed. delete nvme_qpair. 00:22:14.600 15:08:37 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:14.600 15:08:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:15.536 15:08:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:15.536 15:08:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:15.536 15:08:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.536 15:08:38 -- common/autotest_common.sh@10 -- # set +x 00:22:15.536 15:08:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:15.536 15:08:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:15.536 15:08:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:15.536 15:08:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.536 15:08:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:15.536 15:08:39 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:15.536 15:08:39 -- host/discovery_remove_ifc.sh@90 -- # killprocess 96987 00:22:15.536 15:08:39 -- common/autotest_common.sh@936 -- # '[' -z 96987 ']' 00:22:15.536 15:08:39 -- common/autotest_common.sh@940 -- # kill -0 96987 00:22:15.536 15:08:39 -- common/autotest_common.sh@941 -- # uname 00:22:15.536 15:08:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:15.536 15:08:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96987 00:22:15.536 killing process with pid 96987 00:22:15.536 15:08:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:15.536 15:08:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:15.536 15:08:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96987' 00:22:15.536 15:08:39 -- common/autotest_common.sh@955 -- # kill 96987 00:22:15.536 15:08:39 -- common/autotest_common.sh@960 -- # wait 96987 00:22:15.793 15:08:39 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:15.793 15:08:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:15.793 15:08:39 -- nvmf/common.sh@116 -- # sync 00:22:15.793 15:08:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:15.793 15:08:39 -- nvmf/common.sh@119 -- # set +e 00:22:15.793 15:08:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:15.793 15:08:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:15.793 rmmod nvme_tcp 00:22:15.793 rmmod nvme_fabrics 00:22:15.793 rmmod nvme_keyring 00:22:15.793 15:08:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:15.793 15:08:39 -- nvmf/common.sh@123 -- # set -e 00:22:15.793 15:08:39 -- 
nvmf/common.sh@124 -- # return 0 00:22:15.793 15:08:39 -- nvmf/common.sh@477 -- # '[' -n 96931 ']' 00:22:15.793 15:08:39 -- nvmf/common.sh@478 -- # killprocess 96931 00:22:15.793 15:08:39 -- common/autotest_common.sh@936 -- # '[' -z 96931 ']' 00:22:15.793 15:08:39 -- common/autotest_common.sh@940 -- # kill -0 96931 00:22:15.793 15:08:39 -- common/autotest_common.sh@941 -- # uname 00:22:15.793 15:08:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:15.793 15:08:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96931 00:22:16.067 killing process with pid 96931 00:22:16.067 15:08:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:16.067 15:08:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:16.067 15:08:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96931' 00:22:16.067 15:08:39 -- common/autotest_common.sh@955 -- # kill 96931 00:22:16.067 15:08:39 -- common/autotest_common.sh@960 -- # wait 96931 00:22:16.371 15:08:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:16.371 15:08:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:16.371 15:08:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:16.371 15:08:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:16.371 15:08:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:16.371 15:08:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.371 15:08:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.371 15:08:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.371 15:08:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:16.371 ************************************ 00:22:16.371 END TEST nvmf_discovery_remove_ifc 00:22:16.371 ************************************ 00:22:16.371 00:22:16.371 real 0m15.317s 00:22:16.371 user 0m26.417s 00:22:16.371 sys 0m1.604s 00:22:16.371 15:08:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:16.371 15:08:39 -- common/autotest_common.sh@10 -- # set +x 00:22:16.371 15:08:39 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:22:16.371 15:08:39 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:16.371 15:08:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:16.371 15:08:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:16.371 15:08:39 -- common/autotest_common.sh@10 -- # set +x 00:22:16.371 ************************************ 00:22:16.371 START TEST nvmf_digest 00:22:16.371 ************************************ 00:22:16.371 15:08:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:16.371 * Looking for test storage... 
00:22:16.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:16.371 15:08:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:16.371 15:08:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:16.371 15:08:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:16.371 15:08:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:16.371 15:08:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:16.371 15:08:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:16.371 15:08:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:16.371 15:08:39 -- scripts/common.sh@335 -- # IFS=.-: 00:22:16.371 15:08:39 -- scripts/common.sh@335 -- # read -ra ver1 00:22:16.371 15:08:39 -- scripts/common.sh@336 -- # IFS=.-: 00:22:16.371 15:08:39 -- scripts/common.sh@336 -- # read -ra ver2 00:22:16.371 15:08:39 -- scripts/common.sh@337 -- # local 'op=<' 00:22:16.371 15:08:39 -- scripts/common.sh@339 -- # ver1_l=2 00:22:16.371 15:08:39 -- scripts/common.sh@340 -- # ver2_l=1 00:22:16.371 15:08:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:16.371 15:08:39 -- scripts/common.sh@343 -- # case "$op" in 00:22:16.371 15:08:39 -- scripts/common.sh@344 -- # : 1 00:22:16.371 15:08:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:16.371 15:08:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:16.371 15:08:39 -- scripts/common.sh@364 -- # decimal 1 00:22:16.371 15:08:39 -- scripts/common.sh@352 -- # local d=1 00:22:16.371 15:08:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:16.371 15:08:39 -- scripts/common.sh@354 -- # echo 1 00:22:16.371 15:08:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:16.371 15:08:39 -- scripts/common.sh@365 -- # decimal 2 00:22:16.371 15:08:39 -- scripts/common.sh@352 -- # local d=2 00:22:16.371 15:08:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:16.371 15:08:39 -- scripts/common.sh@354 -- # echo 2 00:22:16.371 15:08:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:16.371 15:08:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:16.371 15:08:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:16.371 15:08:39 -- scripts/common.sh@367 -- # return 0 00:22:16.371 15:08:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:16.371 15:08:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:16.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.371 --rc genhtml_branch_coverage=1 00:22:16.371 --rc genhtml_function_coverage=1 00:22:16.371 --rc genhtml_legend=1 00:22:16.371 --rc geninfo_all_blocks=1 00:22:16.372 --rc geninfo_unexecuted_blocks=1 00:22:16.372 00:22:16.372 ' 00:22:16.372 15:08:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:16.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.372 --rc genhtml_branch_coverage=1 00:22:16.372 --rc genhtml_function_coverage=1 00:22:16.372 --rc genhtml_legend=1 00:22:16.372 --rc geninfo_all_blocks=1 00:22:16.372 --rc geninfo_unexecuted_blocks=1 00:22:16.372 00:22:16.372 ' 00:22:16.372 15:08:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:16.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.372 --rc genhtml_branch_coverage=1 00:22:16.372 --rc genhtml_function_coverage=1 00:22:16.372 --rc genhtml_legend=1 00:22:16.372 --rc geninfo_all_blocks=1 00:22:16.372 --rc geninfo_unexecuted_blocks=1 00:22:16.372 00:22:16.372 ' 00:22:16.372 
15:08:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:16.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.372 --rc genhtml_branch_coverage=1 00:22:16.372 --rc genhtml_function_coverage=1 00:22:16.372 --rc genhtml_legend=1 00:22:16.372 --rc geninfo_all_blocks=1 00:22:16.372 --rc geninfo_unexecuted_blocks=1 00:22:16.372 00:22:16.372 ' 00:22:16.372 15:08:39 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:16.372 15:08:39 -- nvmf/common.sh@7 -- # uname -s 00:22:16.372 15:08:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.372 15:08:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.372 15:08:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.372 15:08:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.372 15:08:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.372 15:08:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.372 15:08:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.372 15:08:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.372 15:08:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.372 15:08:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.372 15:08:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:22:16.372 15:08:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:22:16.372 15:08:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.372 15:08:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.372 15:08:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:16.372 15:08:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:16.372 15:08:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.372 15:08:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.372 15:08:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.372 15:08:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.372 15:08:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.372 15:08:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.372 15:08:39 -- paths/export.sh@5 -- # export PATH 00:22:16.372 15:08:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.372 15:08:39 -- nvmf/common.sh@46 -- # : 0 00:22:16.372 15:08:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:16.372 15:08:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:16.372 15:08:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:16.372 15:08:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.372 15:08:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.372 15:08:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:16.372 15:08:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:16.372 15:08:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:16.372 15:08:39 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:16.372 15:08:39 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:16.372 15:08:39 -- host/digest.sh@16 -- # runtime=2 00:22:16.372 15:08:39 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:22:16.372 15:08:39 -- host/digest.sh@132 -- # nvmftestinit 00:22:16.372 15:08:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:16.372 15:08:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.372 15:08:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:16.372 15:08:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:16.372 15:08:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:16.372 15:08:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.372 15:08:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.372 15:08:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.372 15:08:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:16.372 15:08:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:16.372 15:08:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:16.372 15:08:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:16.372 15:08:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:16.372 15:08:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:16.372 15:08:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.372 15:08:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:16.372 15:08:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:16.372 15:08:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:16.372 15:08:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:22:16.372 15:08:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:16.372 15:08:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:16.372 15:08:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.372 15:08:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:16.372 15:08:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:16.372 15:08:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:16.372 15:08:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:16.372 15:08:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:16.631 15:08:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:16.631 Cannot find device "nvmf_tgt_br" 00:22:16.631 15:08:39 -- nvmf/common.sh@154 -- # true 00:22:16.631 15:08:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:16.631 Cannot find device "nvmf_tgt_br2" 00:22:16.631 15:08:39 -- nvmf/common.sh@155 -- # true 00:22:16.631 15:08:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:16.631 15:08:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:16.631 Cannot find device "nvmf_tgt_br" 00:22:16.631 15:08:39 -- nvmf/common.sh@157 -- # true 00:22:16.631 15:08:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:16.631 Cannot find device "nvmf_tgt_br2" 00:22:16.631 15:08:39 -- nvmf/common.sh@158 -- # true 00:22:16.631 15:08:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:16.631 15:08:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:16.631 15:08:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:16.631 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:16.631 15:08:40 -- nvmf/common.sh@161 -- # true 00:22:16.631 15:08:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:16.631 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:16.631 15:08:40 -- nvmf/common.sh@162 -- # true 00:22:16.631 15:08:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:16.631 15:08:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:16.631 15:08:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:16.631 15:08:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:16.631 15:08:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:16.631 15:08:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:16.631 15:08:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:16.631 15:08:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:16.631 15:08:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:16.631 15:08:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:16.631 15:08:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:16.631 15:08:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:16.631 15:08:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:16.631 15:08:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:16.631 15:08:40 -- nvmf/common.sh@187 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:16.631 15:08:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:16.631 15:08:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:16.631 15:08:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:16.631 15:08:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:16.631 15:08:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:16.631 15:08:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:16.890 15:08:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:16.890 15:08:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:16.890 15:08:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:16.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:16.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:22:16.890 00:22:16.890 --- 10.0.0.2 ping statistics --- 00:22:16.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.890 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:22:16.890 15:08:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:16.890 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:16.890 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:22:16.890 00:22:16.890 --- 10.0.0.3 ping statistics --- 00:22:16.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.890 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:22:16.890 15:08:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:16.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:16.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:22:16.890 00:22:16.890 --- 10.0.0.1 ping statistics --- 00:22:16.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.890 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:22:16.890 15:08:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:16.890 15:08:40 -- nvmf/common.sh@421 -- # return 0 00:22:16.890 15:08:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:16.890 15:08:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:16.890 15:08:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:16.890 15:08:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:16.890 15:08:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:16.890 15:08:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:16.890 15:08:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:16.890 15:08:40 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:16.890 15:08:40 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:22:16.890 15:08:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:16.890 15:08:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:16.890 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:22:16.890 ************************************ 00:22:16.890 START TEST nvmf_digest_clean 00:22:16.890 ************************************ 00:22:16.890 15:08:40 -- common/autotest_common.sh@1114 -- # run_digest 00:22:16.890 15:08:40 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:22:16.890 15:08:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:16.890 15:08:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:16.890 15:08:40 -- common/autotest_common.sh@10 -- # set +x 
00:22:16.890 15:08:40 -- nvmf/common.sh@469 -- # nvmfpid=97427 00:22:16.890 15:08:40 -- nvmf/common.sh@470 -- # waitforlisten 97427 00:22:16.890 15:08:40 -- common/autotest_common.sh@829 -- # '[' -z 97427 ']' 00:22:16.890 15:08:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.890 15:08:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:16.890 15:08:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.890 15:08:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.891 15:08:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.891 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:22:16.891 [2024-11-18 15:08:40.330798] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:16.891 [2024-11-18 15:08:40.330900] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.891 [2024-11-18 15:08:40.467097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.150 [2024-11-18 15:08:40.526604] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:17.150 [2024-11-18 15:08:40.526772] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.150 [2024-11-18 15:08:40.526785] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.150 [2024-11-18 15:08:40.526792] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:17.150 [2024-11-18 15:08:40.526822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.718 15:08:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.718 15:08:41 -- common/autotest_common.sh@862 -- # return 0 00:22:17.718 15:08:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:17.718 15:08:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:17.718 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:22:17.718 15:08:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.718 15:08:41 -- host/digest.sh@120 -- # common_target_config 00:22:17.718 15:08:41 -- host/digest.sh@43 -- # rpc_cmd 00:22:17.718 15:08:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.718 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:22:17.977 null0 00:22:17.977 [2024-11-18 15:08:41.405557] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.977 [2024-11-18 15:08:41.429672] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.977 15:08:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.977 15:08:41 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:22:17.977 15:08:41 -- host/digest.sh@77 -- # local rw bs qd 00:22:17.977 15:08:41 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:17.977 15:08:41 -- host/digest.sh@80 -- # rw=randread 00:22:17.977 15:08:41 -- host/digest.sh@80 -- # bs=4096 00:22:17.977 15:08:41 -- host/digest.sh@80 -- # qd=128 00:22:17.977 15:08:41 -- host/digest.sh@82 -- # bperfpid=97477 00:22:17.978 15:08:41 -- host/digest.sh@83 -- # waitforlisten 97477 /var/tmp/bperf.sock 00:22:17.978 15:08:41 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:17.978 15:08:41 -- common/autotest_common.sh@829 -- # '[' -z 97477 ']' 00:22:17.978 15:08:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:17.978 15:08:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:17.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:17.978 15:08:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:17.978 15:08:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:17.978 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:22:17.978 [2024-11-18 15:08:41.491114] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:17.978 [2024-11-18 15:08:41.491215] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97477 ] 00:22:18.237 [2024-11-18 15:08:41.638016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.237 [2024-11-18 15:08:41.706839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.237 15:08:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:18.237 15:08:41 -- common/autotest_common.sh@862 -- # return 0 00:22:18.237 15:08:41 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:18.237 15:08:41 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:18.237 15:08:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:18.806 15:08:42 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:18.806 15:08:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:18.806 nvme0n1 00:22:19.065 15:08:42 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:19.065 15:08:42 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:19.065 Running I/O for 2 seconds... 00:22:20.970 00:22:20.970 Latency(us) 00:22:20.970 [2024-11-18T15:08:44.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:20.970 [2024-11-18T15:08:44.560Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:20.970 nvme0n1 : 2.00 24442.03 95.48 0.00 0.00 5231.51 2368.23 17873.45 00:22:20.970 [2024-11-18T15:08:44.560Z] =================================================================================================================== 00:22:20.970 [2024-11-18T15:08:44.560Z] Total : 24442.03 95.48 0.00 0.00 5231.51 2368.23 17873.45 00:22:20.970 0 00:22:20.970 15:08:44 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:20.970 15:08:44 -- host/digest.sh@92 -- # get_accel_stats 00:22:20.970 15:08:44 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:21.228 15:08:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:21.228 15:08:44 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:21.228 | select(.opcode=="crc32c") 00:22:21.228 | "\(.module_name) \(.executed)"' 00:22:21.228 15:08:44 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:21.228 15:08:44 -- host/digest.sh@93 -- # exp_module=software 00:22:21.228 15:08:44 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:21.228 15:08:44 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:21.228 15:08:44 -- host/digest.sh@97 -- # killprocess 97477 00:22:21.228 15:08:44 -- common/autotest_common.sh@936 -- # '[' -z 97477 ']' 00:22:21.228 15:08:44 -- common/autotest_common.sh@940 -- # kill -0 97477 00:22:21.228 15:08:44 -- common/autotest_common.sh@941 -- # uname 00:22:21.228 15:08:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:21.228 15:08:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97477 00:22:21.487 15:08:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:21.487 15:08:44 
-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:21.488 killing process with pid 97477 00:22:21.488 15:08:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97477' 00:22:21.488 Received shutdown signal, test time was about 2.000000 seconds 00:22:21.488 00:22:21.488 Latency(us) 00:22:21.488 [2024-11-18T15:08:45.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.488 [2024-11-18T15:08:45.078Z] =================================================================================================================== 00:22:21.488 [2024-11-18T15:08:45.078Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:21.488 15:08:44 -- common/autotest_common.sh@955 -- # kill 97477 00:22:21.488 15:08:44 -- common/autotest_common.sh@960 -- # wait 97477 00:22:21.747 15:08:45 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:22:21.747 15:08:45 -- host/digest.sh@77 -- # local rw bs qd 00:22:21.747 15:08:45 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:21.747 15:08:45 -- host/digest.sh@80 -- # rw=randread 00:22:21.747 15:08:45 -- host/digest.sh@80 -- # bs=131072 00:22:21.747 15:08:45 -- host/digest.sh@80 -- # qd=16 00:22:21.747 15:08:45 -- host/digest.sh@82 -- # bperfpid=97548 00:22:21.747 15:08:45 -- host/digest.sh@83 -- # waitforlisten 97548 /var/tmp/bperf.sock 00:22:21.747 15:08:45 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:21.747 15:08:45 -- common/autotest_common.sh@829 -- # '[' -z 97548 ']' 00:22:21.747 15:08:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:21.747 15:08:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:21.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:21.747 15:08:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:21.747 15:08:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:21.747 15:08:45 -- common/autotest_common.sh@10 -- # set +x 00:22:21.747 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:21.747 Zero copy mechanism will not be used. 00:22:21.747 [2024-11-18 15:08:45.141438] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:21.747 [2024-11-18 15:08:45.141537] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97548 ] 00:22:21.747 [2024-11-18 15:08:45.278926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.005 [2024-11-18 15:08:45.339181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.573 15:08:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:22.573 15:08:46 -- common/autotest_common.sh@862 -- # return 0 00:22:22.573 15:08:46 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:22.573 15:08:46 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:22.573 15:08:46 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:22.832 15:08:46 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:22.832 15:08:46 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:23.090 nvme0n1 00:22:23.349 15:08:46 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:23.349 15:08:46 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:23.349 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:23.349 Zero copy mechanism will not be used. 00:22:23.349 Running I/O for 2 seconds... 00:22:25.253 00:22:25.253 Latency(us) 00:22:25.253 [2024-11-18T15:08:48.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.253 [2024-11-18T15:08:48.843Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:25.253 nvme0n1 : 2.00 9270.01 1158.75 0.00 0.00 1723.45 659.08 5391.83 00:22:25.253 [2024-11-18T15:08:48.843Z] =================================================================================================================== 00:22:25.253 [2024-11-18T15:08:48.843Z] Total : 9270.01 1158.75 0.00 0.00 1723.45 659.08 5391.83 00:22:25.253 0 00:22:25.253 15:08:48 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:25.253 15:08:48 -- host/digest.sh@92 -- # get_accel_stats 00:22:25.253 15:08:48 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:25.253 15:08:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:25.253 15:08:48 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:25.253 | select(.opcode=="crc32c") 00:22:25.253 | "\(.module_name) \(.executed)"' 00:22:25.512 15:08:49 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:25.512 15:08:49 -- host/digest.sh@93 -- # exp_module=software 00:22:25.512 15:08:49 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:25.512 15:08:49 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:25.512 15:08:49 -- host/digest.sh@97 -- # killprocess 97548 00:22:25.512 15:08:49 -- common/autotest_common.sh@936 -- # '[' -z 97548 ']' 00:22:25.512 15:08:49 -- common/autotest_common.sh@940 -- # kill -0 97548 00:22:25.512 15:08:49 -- common/autotest_common.sh@941 -- # uname 00:22:25.512 15:08:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:25.512 15:08:49 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 97548 00:22:25.512 killing process with pid 97548 00:22:25.512 Received shutdown signal, test time was about 2.000000 seconds 00:22:25.512 00:22:25.512 Latency(us) 00:22:25.512 [2024-11-18T15:08:49.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.512 [2024-11-18T15:08:49.102Z] =================================================================================================================== 00:22:25.512 [2024-11-18T15:08:49.102Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:25.512 15:08:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:25.512 15:08:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:25.512 15:08:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97548' 00:22:25.512 15:08:49 -- common/autotest_common.sh@955 -- # kill 97548 00:22:25.512 15:08:49 -- common/autotest_common.sh@960 -- # wait 97548 00:22:25.770 15:08:49 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:22:25.770 15:08:49 -- host/digest.sh@77 -- # local rw bs qd 00:22:25.770 15:08:49 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:25.770 15:08:49 -- host/digest.sh@80 -- # rw=randwrite 00:22:25.770 15:08:49 -- host/digest.sh@80 -- # bs=4096 00:22:25.770 15:08:49 -- host/digest.sh@80 -- # qd=128 00:22:25.770 15:08:49 -- host/digest.sh@82 -- # bperfpid=97638 00:22:25.770 15:08:49 -- host/digest.sh@83 -- # waitforlisten 97638 /var/tmp/bperf.sock 00:22:25.770 15:08:49 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:25.770 15:08:49 -- common/autotest_common.sh@829 -- # '[' -z 97638 ']' 00:22:25.770 15:08:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:25.770 15:08:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:25.770 15:08:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:25.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:25.770 15:08:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:25.770 15:08:49 -- common/autotest_common.sh@10 -- # set +x 00:22:26.029 [2024-11-18 15:08:49.391862] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:26.029 [2024-11-18 15:08:49.391976] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97638 ] 00:22:26.029 [2024-11-18 15:08:49.529259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.029 [2024-11-18 15:08:49.596956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.962 15:08:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:26.962 15:08:50 -- common/autotest_common.sh@862 -- # return 0 00:22:26.962 15:08:50 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:26.962 15:08:50 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:26.962 15:08:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:27.221 15:08:50 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:27.221 15:08:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:27.480 nvme0n1 00:22:27.480 15:08:50 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:27.480 15:08:50 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:27.480 Running I/O for 2 seconds... 00:22:29.384 00:22:29.384 Latency(us) 00:22:29.384 [2024-11-18T15:08:52.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.384 [2024-11-18T15:08:52.974Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:29.384 nvme0n1 : 2.00 28880.39 112.81 0.00 0.00 4427.94 1876.71 15609.48 00:22:29.384 [2024-11-18T15:08:52.974Z] =================================================================================================================== 00:22:29.384 [2024-11-18T15:08:52.974Z] Total : 28880.39 112.81 0.00 0.00 4427.94 1876.71 15609.48 00:22:29.384 0 00:22:29.384 15:08:52 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:29.384 15:08:52 -- host/digest.sh@92 -- # get_accel_stats 00:22:29.384 15:08:52 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:29.384 15:08:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:29.384 15:08:52 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:29.384 | select(.opcode=="crc32c") 00:22:29.384 | "\(.module_name) \(.executed)"' 00:22:29.643 15:08:53 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:29.643 15:08:53 -- host/digest.sh@93 -- # exp_module=software 00:22:29.643 15:08:53 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:29.643 15:08:53 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:29.643 15:08:53 -- host/digest.sh@97 -- # killprocess 97638 00:22:29.643 15:08:53 -- common/autotest_common.sh@936 -- # '[' -z 97638 ']' 00:22:29.643 15:08:53 -- common/autotest_common.sh@940 -- # kill -0 97638 00:22:29.643 15:08:53 -- common/autotest_common.sh@941 -- # uname 00:22:29.643 15:08:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:29.643 15:08:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97638 00:22:29.902 killing process with pid 97638 00:22:29.902 Received shutdown signal, test time was 
about 2.000000 seconds 00:22:29.902 00:22:29.902 Latency(us) 00:22:29.902 [2024-11-18T15:08:53.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.902 [2024-11-18T15:08:53.492Z] =================================================================================================================== 00:22:29.902 [2024-11-18T15:08:53.492Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:29.902 15:08:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:29.902 15:08:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:29.902 15:08:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97638' 00:22:29.902 15:08:53 -- common/autotest_common.sh@955 -- # kill 97638 00:22:29.902 15:08:53 -- common/autotest_common.sh@960 -- # wait 97638 00:22:29.902 15:08:53 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:22:29.902 15:08:53 -- host/digest.sh@77 -- # local rw bs qd 00:22:29.902 15:08:53 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:29.902 15:08:53 -- host/digest.sh@80 -- # rw=randwrite 00:22:29.902 15:08:53 -- host/digest.sh@80 -- # bs=131072 00:22:29.902 15:08:53 -- host/digest.sh@80 -- # qd=16 00:22:29.902 15:08:53 -- host/digest.sh@82 -- # bperfpid=97724 00:22:29.902 15:08:53 -- host/digest.sh@83 -- # waitforlisten 97724 /var/tmp/bperf.sock 00:22:29.902 15:08:53 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:29.902 15:08:53 -- common/autotest_common.sh@829 -- # '[' -z 97724 ']' 00:22:29.902 15:08:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:29.902 15:08:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:29.903 15:08:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:29.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:29.903 15:08:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:29.903 15:08:53 -- common/autotest_common.sh@10 -- # set +x 00:22:30.162 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:30.162 Zero copy mechanism will not be used. 00:22:30.162 [2024-11-18 15:08:53.533380] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:30.162 [2024-11-18 15:08:53.533487] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97724 ] 00:22:30.162 [2024-11-18 15:08:53.667635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.162 [2024-11-18 15:08:53.733536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.099 15:08:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:31.099 15:08:54 -- common/autotest_common.sh@862 -- # return 0 00:22:31.099 15:08:54 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:31.099 15:08:54 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:31.099 15:08:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:31.358 15:08:54 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:31.358 15:08:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:31.617 nvme0n1 00:22:31.617 15:08:55 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:31.617 15:08:55 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:31.617 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:31.617 Zero copy mechanism will not be used. 00:22:31.617 Running I/O for 2 seconds... 00:22:34.152 00:22:34.152 Latency(us) 00:22:34.152 [2024-11-18T15:08:57.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.152 [2024-11-18T15:08:57.742Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:34.152 nvme0n1 : 2.00 7947.65 993.46 0.00 0.00 2009.16 1660.74 9234.62 00:22:34.152 [2024-11-18T15:08:57.742Z] =================================================================================================================== 00:22:34.152 [2024-11-18T15:08:57.742Z] Total : 7947.65 993.46 0.00 0.00 2009.16 1660.74 9234.62 00:22:34.152 0 00:22:34.152 15:08:57 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:34.152 15:08:57 -- host/digest.sh@92 -- # get_accel_stats 00:22:34.152 15:08:57 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:34.152 15:08:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:34.152 15:08:57 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:34.152 | select(.opcode=="crc32c") 00:22:34.152 | "\(.module_name) \(.executed)"' 00:22:34.152 15:08:57 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:34.152 15:08:57 -- host/digest.sh@93 -- # exp_module=software 00:22:34.152 15:08:57 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:34.152 15:08:57 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:34.152 15:08:57 -- host/digest.sh@97 -- # killprocess 97724 00:22:34.152 15:08:57 -- common/autotest_common.sh@936 -- # '[' -z 97724 ']' 00:22:34.152 15:08:57 -- common/autotest_common.sh@940 -- # kill -0 97724 00:22:34.152 15:08:57 -- common/autotest_common.sh@941 -- # uname 00:22:34.152 15:08:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:34.152 15:08:57 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 97724 00:22:34.152 killing process with pid 97724 00:22:34.152 Received shutdown signal, test time was about 2.000000 seconds 00:22:34.152 00:22:34.153 Latency(us) 00:22:34.153 [2024-11-18T15:08:57.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.153 [2024-11-18T15:08:57.743Z] =================================================================================================================== 00:22:34.153 [2024-11-18T15:08:57.743Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:34.153 15:08:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:34.153 15:08:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:34.153 15:08:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97724' 00:22:34.153 15:08:57 -- common/autotest_common.sh@955 -- # kill 97724 00:22:34.153 15:08:57 -- common/autotest_common.sh@960 -- # wait 97724 00:22:34.153 15:08:57 -- host/digest.sh@126 -- # killprocess 97427 00:22:34.153 15:08:57 -- common/autotest_common.sh@936 -- # '[' -z 97427 ']' 00:22:34.153 15:08:57 -- common/autotest_common.sh@940 -- # kill -0 97427 00:22:34.153 15:08:57 -- common/autotest_common.sh@941 -- # uname 00:22:34.153 15:08:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:34.153 15:08:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97427 00:22:34.153 killing process with pid 97427 00:22:34.153 15:08:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:34.153 15:08:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:34.153 15:08:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97427' 00:22:34.153 15:08:57 -- common/autotest_common.sh@955 -- # kill 97427 00:22:34.153 15:08:57 -- common/autotest_common.sh@960 -- # wait 97427 00:22:34.412 ************************************ 00:22:34.412 END TEST nvmf_digest_clean 00:22:34.412 00:22:34.412 real 0m17.616s 00:22:34.412 user 0m31.950s 00:22:34.412 sys 0m5.238s 00:22:34.412 15:08:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:34.412 15:08:57 -- common/autotest_common.sh@10 -- # set +x 00:22:34.412 ************************************ 00:22:34.412 15:08:57 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:22:34.412 15:08:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:34.412 15:08:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:34.412 15:08:57 -- common/autotest_common.sh@10 -- # set +x 00:22:34.412 ************************************ 00:22:34.412 START TEST nvmf_digest_error 00:22:34.412 ************************************ 00:22:34.412 15:08:57 -- common/autotest_common.sh@1114 -- # run_digest_error 00:22:34.412 15:08:57 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:22:34.412 15:08:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:34.412 15:08:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:34.412 15:08:57 -- common/autotest_common.sh@10 -- # set +x 00:22:34.412 15:08:57 -- nvmf/common.sh@469 -- # nvmfpid=97844 00:22:34.412 15:08:57 -- nvmf/common.sh@470 -- # waitforlisten 97844 00:22:34.412 15:08:57 -- common/autotest_common.sh@829 -- # '[' -z 97844 ']' 00:22:34.412 15:08:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:34.412 15:08:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.412 15:08:57 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:22:34.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.412 15:08:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.412 15:08:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:34.412 15:08:57 -- common/autotest_common.sh@10 -- # set +x 00:22:34.670 [2024-11-18 15:08:58.009907] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:34.670 [2024-11-18 15:08:58.010008] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.670 [2024-11-18 15:08:58.149769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.670 [2024-11-18 15:08:58.204841] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:34.670 [2024-11-18 15:08:58.204991] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.670 [2024-11-18 15:08:58.205004] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.670 [2024-11-18 15:08:58.205012] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:34.670 [2024-11-18 15:08:58.205041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.606 15:08:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:35.606 15:08:58 -- common/autotest_common.sh@862 -- # return 0 00:22:35.606 15:08:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:35.606 15:08:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:35.606 15:08:58 -- common/autotest_common.sh@10 -- # set +x 00:22:35.606 15:08:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.606 15:08:58 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:35.606 15:08:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.606 15:08:58 -- common/autotest_common.sh@10 -- # set +x 00:22:35.606 [2024-11-18 15:08:58.905554] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:35.606 15:08:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.606 15:08:58 -- host/digest.sh@104 -- # common_target_config 00:22:35.606 15:08:58 -- host/digest.sh@43 -- # rpc_cmd 00:22:35.606 15:08:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.606 15:08:58 -- common/autotest_common.sh@10 -- # set +x 00:22:35.606 null0 00:22:35.606 [2024-11-18 15:08:59.006224] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.606 [2024-11-18 15:08:59.030355] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.606 15:08:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.606 15:08:59 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:22:35.606 15:08:59 -- host/digest.sh@54 -- # local rw bs qd 00:22:35.606 15:08:59 -- host/digest.sh@56 -- # rw=randread 00:22:35.606 15:08:59 -- host/digest.sh@56 -- # bs=4096 00:22:35.606 15:08:59 -- host/digest.sh@56 -- # qd=128 00:22:35.606 15:08:59 -- host/digest.sh@58 -- # bperfpid=97888 00:22:35.606 15:08:59 -- 
host/digest.sh@60 -- # waitforlisten 97888 /var/tmp/bperf.sock 00:22:35.606 15:08:59 -- common/autotest_common.sh@829 -- # '[' -z 97888 ']' 00:22:35.606 15:08:59 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:35.606 15:08:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:35.606 15:08:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:35.606 15:08:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:35.606 15:08:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.606 15:08:59 -- common/autotest_common.sh@10 -- # set +x 00:22:35.607 [2024-11-18 15:08:59.090245] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:35.607 [2024-11-18 15:08:59.090369] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97888 ] 00:22:35.865 [2024-11-18 15:08:59.222636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.865 [2024-11-18 15:08:59.298283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.430 15:09:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.430 15:09:00 -- common/autotest_common.sh@862 -- # return 0 00:22:36.430 15:09:00 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:36.430 15:09:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:36.689 15:09:00 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:36.689 15:09:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.689 15:09:00 -- common/autotest_common.sh@10 -- # set +x 00:22:36.689 15:09:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.689 15:09:00 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:36.689 15:09:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:36.948 nvme0n1 00:22:36.948 15:09:00 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:36.948 15:09:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.948 15:09:00 -- common/autotest_common.sh@10 -- # set +x 00:22:36.948 15:09:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.948 15:09:00 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:36.948 15:09:00 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:37.207 Running I/O for 2 seconds... 
00:22:37.207 [2024-11-18 15:09:00.610876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.207 [2024-11-18 15:09:00.610938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.207 [2024-11-18 15:09:00.610951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.207 [2024-11-18 15:09:00.623372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.207 [2024-11-18 15:09:00.623402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.207 [2024-11-18 15:09:00.623414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.207 [2024-11-18 15:09:00.632026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.207 [2024-11-18 15:09:00.632056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.207 [2024-11-18 15:09:00.632068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.207 [2024-11-18 15:09:00.645224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.207 [2024-11-18 15:09:00.645256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.207 [2024-11-18 15:09:00.645268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.207 [2024-11-18 15:09:00.656638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.207 [2024-11-18 15:09:00.656668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.207 [2024-11-18 15:09:00.656681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.208 [2024-11-18 15:09:00.668574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.208 [2024-11-18 15:09:00.668604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.208 [2024-11-18 15:09:00.668616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.208 [2024-11-18 15:09:00.681327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.208 [2024-11-18 15:09:00.681355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.208 [2024-11-18 15:09:00.681367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.208 [2024-11-18 15:09:00.694234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.208 [2024-11-18 15:09:00.694264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.208 [2024-11-18 15:09:00.694275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.208 [2024-11-18 15:09:00.704992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.208 [2024-11-18 15:09:00.705021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.208 [2024-11-18 15:09:00.705032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.208 [2024-11-18 15:09:00.713849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.208 [2024-11-18 15:09:00.713877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.208 [2024-11-18 15:09:00.713889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.208 [2024-11-18 15:09:00.724890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.208 [2024-11-18 15:09:00.724920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.208 [2024-11-18 15:09:00.724932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.208 [2024-11-18 15:09:00.734781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.208 [2024-11-18 15:09:00.734810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.208 [2024-11-18 15:09:00.734822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.208 [2024-11-18 15:09:00.747948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.208 [2024-11-18 15:09:00.747977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.208 [2024-11-18 15:09:00.747989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.208 [2024-11-18 15:09:00.758715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.208 [2024-11-18 15:09:00.758744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.208 [2024-11-18 15:09:00.758755] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.208 [2024-11-18 15:09:00.766988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.208 [2024-11-18 15:09:00.767029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.208 [2024-11-18 15:09:00.767040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.208 [2024-11-18 15:09:00.780552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.208 [2024-11-18 15:09:00.780581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.208 [2024-11-18 15:09:00.780594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.208 [2024-11-18 15:09:00.790622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.208 [2024-11-18 15:09:00.790666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.208 [2024-11-18 15:09:00.790677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.467 [2024-11-18 15:09:00.803657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.467 [2024-11-18 15:09:00.803698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.467 [2024-11-18 15:09:00.803709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.467 [2024-11-18 15:09:00.815515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.467 [2024-11-18 15:09:00.815544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.467 [2024-11-18 15:09:00.815557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.467 [2024-11-18 15:09:00.827876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.467 [2024-11-18 15:09:00.827906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.467 [2024-11-18 15:09:00.827917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.467 [2024-11-18 15:09:00.836395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.467 [2024-11-18 15:09:00.836435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:37.467 [2024-11-18 15:09:00.836446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.467 [2024-11-18 15:09:00.848617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.467 [2024-11-18 15:09:00.848645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.467 [2024-11-18 15:09:00.848657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.467 [2024-11-18 15:09:00.861045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.467 [2024-11-18 15:09:00.861098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.467 [2024-11-18 15:09:00.861110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.467 [2024-11-18 15:09:00.870273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.467 [2024-11-18 15:09:00.870303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.467 [2024-11-18 15:09:00.870324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.467 [2024-11-18 15:09:00.879946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.467 [2024-11-18 15:09:00.879989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.467 [2024-11-18 15:09:00.880000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.467 [2024-11-18 15:09:00.888893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.467 [2024-11-18 15:09:00.888922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.467 [2024-11-18 15:09:00.888934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.467 [2024-11-18 15:09:00.899310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.467 [2024-11-18 15:09:00.899358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.467 [2024-11-18 15:09:00.899369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.467 [2024-11-18 15:09:00.909254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.467 [2024-11-18 15:09:00.909285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 
lba:314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.468 [2024-11-18 15:09:00.909297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.468 [2024-11-18 15:09:00.918929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.468 [2024-11-18 15:09:00.918958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.468 [2024-11-18 15:09:00.918969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.468 [2024-11-18 15:09:00.928978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.468 [2024-11-18 15:09:00.929007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.468 [2024-11-18 15:09:00.929018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.468 [2024-11-18 15:09:00.938285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.468 [2024-11-18 15:09:00.938327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.468 [2024-11-18 15:09:00.938339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.468 [2024-11-18 15:09:00.948072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.468 [2024-11-18 15:09:00.948113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.468 [2024-11-18 15:09:00.948123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.468 [2024-11-18 15:09:00.958527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.468 [2024-11-18 15:09:00.958568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.468 [2024-11-18 15:09:00.958579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.468 [2024-11-18 15:09:00.969880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.468 [2024-11-18 15:09:00.969908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.468 [2024-11-18 15:09:00.969919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.468 [2024-11-18 15:09:00.978046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.468 [2024-11-18 15:09:00.978076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.468 [2024-11-18 15:09:00.978089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.468 [2024-11-18 15:09:00.990341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.468 [2024-11-18 15:09:00.990369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.468 [2024-11-18 15:09:00.990380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.468 [2024-11-18 15:09:01.003193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.468 [2024-11-18 15:09:01.003223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.468 [2024-11-18 15:09:01.003235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.468 [2024-11-18 15:09:01.015204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.468 [2024-11-18 15:09:01.015234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.468 [2024-11-18 15:09:01.015245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.468 [2024-11-18 15:09:01.028007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.468 [2024-11-18 15:09:01.028038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.468 [2024-11-18 15:09:01.028049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.468 [2024-11-18 15:09:01.039444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.468 [2024-11-18 15:09:01.039472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.468 [2024-11-18 15:09:01.039483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.468 [2024-11-18 15:09:01.049697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.468 [2024-11-18 15:09:01.049725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.468 [2024-11-18 15:09:01.049736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.727 [2024-11-18 15:09:01.062078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 
00:22:37.727 [2024-11-18 15:09:01.062118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.727 [2024-11-18 15:09:01.062129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.727 [2024-11-18 15:09:01.075417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.727 [2024-11-18 15:09:01.075445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.727 [2024-11-18 15:09:01.075456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.727 [2024-11-18 15:09:01.086416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.727 [2024-11-18 15:09:01.086458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.727 [2024-11-18 15:09:01.086469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.727 [2024-11-18 15:09:01.098639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.727 [2024-11-18 15:09:01.098668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.727 [2024-11-18 15:09:01.098680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.727 [2024-11-18 15:09:01.107065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.727 [2024-11-18 15:09:01.107094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.727 [2024-11-18 15:09:01.107106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.727 [2024-11-18 15:09:01.119910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.727 [2024-11-18 15:09:01.119939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.727 [2024-11-18 15:09:01.119950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.727 [2024-11-18 15:09:01.131812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.727 [2024-11-18 15:09:01.131854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.727 [2024-11-18 15:09:01.131865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.727 [2024-11-18 15:09:01.143912] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.727 [2024-11-18 15:09:01.143941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.727 [2024-11-18 15:09:01.143952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.727 [2024-11-18 15:09:01.156532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.727 [2024-11-18 15:09:01.156561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.727 [2024-11-18 15:09:01.156572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.727 [2024-11-18 15:09:01.168690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.727 [2024-11-18 15:09:01.168719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.727 [2024-11-18 15:09:01.168730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.727 [2024-11-18 15:09:01.178075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.727 [2024-11-18 15:09:01.178105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.727 [2024-11-18 15:09:01.178116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.727 [2024-11-18 15:09:01.188478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.727 [2024-11-18 15:09:01.188506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.727 [2024-11-18 15:09:01.188517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.728 [2024-11-18 15:09:01.199084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.728 [2024-11-18 15:09:01.199113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.728 [2024-11-18 15:09:01.199124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.728 [2024-11-18 15:09:01.209689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.728 [2024-11-18 15:09:01.209718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.728 [2024-11-18 15:09:01.209729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:22:37.728 [2024-11-18 15:09:01.219709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.728 [2024-11-18 15:09:01.219738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.728 [2024-11-18 15:09:01.219750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.728 [2024-11-18 15:09:01.230439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.728 [2024-11-18 15:09:01.230480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.728 [2024-11-18 15:09:01.230491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.728 [2024-11-18 15:09:01.240285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.728 [2024-11-18 15:09:01.240330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.728 [2024-11-18 15:09:01.240356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.728 [2024-11-18 15:09:01.249807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.728 [2024-11-18 15:09:01.249836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.728 [2024-11-18 15:09:01.249851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.728 [2024-11-18 15:09:01.259602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.728 [2024-11-18 15:09:01.259631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.728 [2024-11-18 15:09:01.259642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.728 [2024-11-18 15:09:01.269512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.728 [2024-11-18 15:09:01.269540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.728 [2024-11-18 15:09:01.269551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.728 [2024-11-18 15:09:01.279466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.728 [2024-11-18 15:09:01.279507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.728 [2024-11-18 15:09:01.279518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.728 [2024-11-18 15:09:01.289290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.728 [2024-11-18 15:09:01.289338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.728 [2024-11-18 15:09:01.289351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.728 [2024-11-18 15:09:01.298075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.728 [2024-11-18 15:09:01.298105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.728 [2024-11-18 15:09:01.298116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.728 [2024-11-18 15:09:01.309341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.728 [2024-11-18 15:09:01.309391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.728 [2024-11-18 15:09:01.309413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.987 [2024-11-18 15:09:01.317853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.987 [2024-11-18 15:09:01.317883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.987 [2024-11-18 15:09:01.317895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.987 [2024-11-18 15:09:01.328710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.988 [2024-11-18 15:09:01.328752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.988 [2024-11-18 15:09:01.328763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.988 [2024-11-18 15:09:01.339045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.988 [2024-11-18 15:09:01.339088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.988 [2024-11-18 15:09:01.339100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.988 [2024-11-18 15:09:01.348720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.988 [2024-11-18 15:09:01.348750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.988 [2024-11-18 15:09:01.348761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.988 [2024-11-18 15:09:01.359572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.988 [2024-11-18 15:09:01.359612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.988 [2024-11-18 15:09:01.359624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.988 [2024-11-18 15:09:01.369625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.988 [2024-11-18 15:09:01.369653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.988 [2024-11-18 15:09:01.369665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.988 [2024-11-18 15:09:01.380075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.988 [2024-11-18 15:09:01.380118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.988 [2024-11-18 15:09:01.380129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.988 [2024-11-18 15:09:01.389684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.988 [2024-11-18 15:09:01.389713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.988 [2024-11-18 15:09:01.389725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.988 [2024-11-18 15:09:01.399536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.988 [2024-11-18 15:09:01.399565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.988 [2024-11-18 15:09:01.399578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.988 [2024-11-18 15:09:01.409945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.988 [2024-11-18 15:09:01.409974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.988 [2024-11-18 15:09:01.409985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.988 [2024-11-18 15:09:01.421956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.988 [2024-11-18 15:09:01.421985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:37.988 [2024-11-18 15:09:01.421995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.988 [2024-11-18 15:09:01.434583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.988 [2024-11-18 15:09:01.434611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.988 [2024-11-18 15:09:01.434622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.988 [2024-11-18 15:09:01.446934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.988 [2024-11-18 15:09:01.446965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.988 [2024-11-18 15:09:01.446976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.988 [2024-11-18 15:09:01.459912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.988 [2024-11-18 15:09:01.459941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.988 [2024-11-18 15:09:01.459952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.988 [2024-11-18 15:09:01.470151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.988 [2024-11-18 15:09:01.470192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.988 [2024-11-18 15:09:01.470203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.988 [2024-11-18 15:09:01.479416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.988 [2024-11-18 15:09:01.479444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.988 [2024-11-18 15:09:01.479455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.988 [2024-11-18 15:09:01.492200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.988 [2024-11-18 15:09:01.492228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.988 [2024-11-18 15:09:01.492239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.988 [2024-11-18 15:09:01.504699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.988 [2024-11-18 15:09:01.504740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 
lba:6379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.988 [2024-11-18 15:09:01.504751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.988 [2024-11-18 15:09:01.514866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.988 [2024-11-18 15:09:01.514908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.988 [2024-11-18 15:09:01.514920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.988 [2024-11-18 15:09:01.524472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.988 [2024-11-18 15:09:01.524500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.988 [2024-11-18 15:09:01.524512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.988 [2024-11-18 15:09:01.535077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.989 [2024-11-18 15:09:01.535118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.989 [2024-11-18 15:09:01.535129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.989 [2024-11-18 15:09:01.543870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.989 [2024-11-18 15:09:01.543898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.989 [2024-11-18 15:09:01.543909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.989 [2024-11-18 15:09:01.557675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.989 [2024-11-18 15:09:01.557705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.989 [2024-11-18 15:09:01.557715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.989 [2024-11-18 15:09:01.568386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:37.989 [2024-11-18 15:09:01.568426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.989 [2024-11-18 15:09:01.568438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.248 [2024-11-18 15:09:01.579962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.248 [2024-11-18 15:09:01.579991] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.248 [2024-11-18 15:09:01.580003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.248 [2024-11-18 15:09:01.589911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.248 [2024-11-18 15:09:01.589941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.248 [2024-11-18 15:09:01.589952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.248 [2024-11-18 15:09:01.600286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.248 [2024-11-18 15:09:01.600326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.248 [2024-11-18 15:09:01.600339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.248 [2024-11-18 15:09:01.612101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.248 [2024-11-18 15:09:01.612130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.248 [2024-11-18 15:09:01.612142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.248 [2024-11-18 15:09:01.621465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.248 [2024-11-18 15:09:01.621494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.248 [2024-11-18 15:09:01.621506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.248 [2024-11-18 15:09:01.631278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.248 [2024-11-18 15:09:01.631327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.248 [2024-11-18 15:09:01.631342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.248 [2024-11-18 15:09:01.640939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.248 [2024-11-18 15:09:01.640968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.248 [2024-11-18 15:09:01.640979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.248 [2024-11-18 15:09:01.650426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 
00:22:38.248 [2024-11-18 15:09:01.650454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.248 [2024-11-18 15:09:01.650464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.248 [2024-11-18 15:09:01.661157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.248 [2024-11-18 15:09:01.661186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.248 [2024-11-18 15:09:01.661197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.248 [2024-11-18 15:09:01.671784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.248 [2024-11-18 15:09:01.671825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.248 [2024-11-18 15:09:01.671836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.248 [2024-11-18 15:09:01.680610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.248 [2024-11-18 15:09:01.680640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.248 [2024-11-18 15:09:01.680652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.248 [2024-11-18 15:09:01.693648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.248 [2024-11-18 15:09:01.693677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.249 [2024-11-18 15:09:01.693687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.249 [2024-11-18 15:09:01.706718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.249 [2024-11-18 15:09:01.706759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.249 [2024-11-18 15:09:01.706770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.249 [2024-11-18 15:09:01.717157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.249 [2024-11-18 15:09:01.717187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.249 [2024-11-18 15:09:01.717199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.249 [2024-11-18 15:09:01.727995] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.249 [2024-11-18 15:09:01.728038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.249 [2024-11-18 15:09:01.728049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.249 [2024-11-18 15:09:01.737440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.249 [2024-11-18 15:09:01.737469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.249 [2024-11-18 15:09:01.737481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.249 [2024-11-18 15:09:01.750097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.249 [2024-11-18 15:09:01.750126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.249 [2024-11-18 15:09:01.750137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.249 [2024-11-18 15:09:01.762432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.249 [2024-11-18 15:09:01.762474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.249 [2024-11-18 15:09:01.762485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.249 [2024-11-18 15:09:01.775124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.249 [2024-11-18 15:09:01.775154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.249 [2024-11-18 15:09:01.775165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.249 [2024-11-18 15:09:01.786109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.249 [2024-11-18 15:09:01.786152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.249 [2024-11-18 15:09:01.786163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.249 [2024-11-18 15:09:01.794279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.249 [2024-11-18 15:09:01.794309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.249 [2024-11-18 15:09:01.794333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:22:38.249 [2024-11-18 15:09:01.805897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.249 [2024-11-18 15:09:01.805938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.249 [2024-11-18 15:09:01.805950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.249 [2024-11-18 15:09:01.816299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.249 [2024-11-18 15:09:01.816349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.249 [2024-11-18 15:09:01.816361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.249 [2024-11-18 15:09:01.827487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.249 [2024-11-18 15:09:01.827516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.249 [2024-11-18 15:09:01.827529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.508 [2024-11-18 15:09:01.836661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.508 [2024-11-18 15:09:01.836691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.508 [2024-11-18 15:09:01.836702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.508 [2024-11-18 15:09:01.846736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.508 [2024-11-18 15:09:01.846765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.508 [2024-11-18 15:09:01.846777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.508 [2024-11-18 15:09:01.856725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.508 [2024-11-18 15:09:01.856769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.508 [2024-11-18 15:09:01.856780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.508 [2024-11-18 15:09:01.866773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.508 [2024-11-18 15:09:01.866814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.508 [2024-11-18 15:09:01.866825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.508 [2024-11-18 15:09:01.875913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.508 [2024-11-18 15:09:01.875941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.509 [2024-11-18 15:09:01.875953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.509 [2024-11-18 15:09:01.886570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.509 [2024-11-18 15:09:01.886611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.509 [2024-11-18 15:09:01.886622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.509 [2024-11-18 15:09:01.897899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.509 [2024-11-18 15:09:01.897928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.509 [2024-11-18 15:09:01.897940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.509 [2024-11-18 15:09:01.910399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.509 [2024-11-18 15:09:01.910440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.509 [2024-11-18 15:09:01.910451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.509 [2024-11-18 15:09:01.922189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.509 [2024-11-18 15:09:01.922219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.509 [2024-11-18 15:09:01.922229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.509 [2024-11-18 15:09:01.930217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.509 [2024-11-18 15:09:01.930245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.509 [2024-11-18 15:09:01.930256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.509 [2024-11-18 15:09:01.942816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.509 [2024-11-18 15:09:01.942845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.509 [2024-11-18 15:09:01.942856] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.509 [2024-11-18 15:09:01.955572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.509 [2024-11-18 15:09:01.955601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.509 [2024-11-18 15:09:01.955612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.509 [2024-11-18 15:09:01.967443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.509 [2024-11-18 15:09:01.967484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.509 [2024-11-18 15:09:01.967495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.509 [2024-11-18 15:09:01.979714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.509 [2024-11-18 15:09:01.979756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.509 [2024-11-18 15:09:01.979767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.509 [2024-11-18 15:09:01.990363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.509 [2024-11-18 15:09:01.990391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.509 [2024-11-18 15:09:01.990402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.509 [2024-11-18 15:09:01.998590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.509 [2024-11-18 15:09:01.998618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.509 [2024-11-18 15:09:01.998631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.509 [2024-11-18 15:09:02.010939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.509 [2024-11-18 15:09:02.010968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.509 [2024-11-18 15:09:02.010979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.509 [2024-11-18 15:09:02.021617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.509 [2024-11-18 15:09:02.021647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:38.509 [2024-11-18 15:09:02.021657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.509 [2024-11-18 15:09:02.031286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.509 [2024-11-18 15:09:02.031330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.509 [2024-11-18 15:09:02.031344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.509 [2024-11-18 15:09:02.043709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.509 [2024-11-18 15:09:02.043738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.509 [2024-11-18 15:09:02.043749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.509 [2024-11-18 15:09:02.054149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.509 [2024-11-18 15:09:02.054178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.509 [2024-11-18 15:09:02.054190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.509 [2024-11-18 15:09:02.063697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.509 [2024-11-18 15:09:02.063726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.509 [2024-11-18 15:09:02.063737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.509 [2024-11-18 15:09:02.072910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.509 [2024-11-18 15:09:02.072953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.509 [2024-11-18 15:09:02.072964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.509 [2024-11-18 15:09:02.083448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.509 [2024-11-18 15:09:02.083478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.509 [2024-11-18 15:09:02.083490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.509 [2024-11-18 15:09:02.092310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.509 [2024-11-18 15:09:02.092359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9270 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.509 [2024-11-18 15:09:02.092371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.769 [2024-11-18 15:09:02.103191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.769 [2024-11-18 15:09:02.103221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.769 [2024-11-18 15:09:02.103232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.769 [2024-11-18 15:09:02.114278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.769 [2024-11-18 15:09:02.114307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.769 [2024-11-18 15:09:02.114329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.769 [2024-11-18 15:09:02.124480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.769 [2024-11-18 15:09:02.124509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.769 [2024-11-18 15:09:02.124521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.769 [2024-11-18 15:09:02.133566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.769 [2024-11-18 15:09:02.133595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.769 [2024-11-18 15:09:02.133606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.769 [2024-11-18 15:09:02.142743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.769 [2024-11-18 15:09:02.142771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.769 [2024-11-18 15:09:02.142783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.769 [2024-11-18 15:09:02.152360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.769 [2024-11-18 15:09:02.152419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.769 [2024-11-18 15:09:02.152431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.769 [2024-11-18 15:09:02.164695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.769 [2024-11-18 15:09:02.164725] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.769 [2024-11-18 15:09:02.164736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.769 [2024-11-18 15:09:02.177020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.769 [2024-11-18 15:09:02.177089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.769 [2024-11-18 15:09:02.177104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.769 [2024-11-18 15:09:02.189235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.769 [2024-11-18 15:09:02.189281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.769 [2024-11-18 15:09:02.189295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.769 [2024-11-18 15:09:02.200888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.769 [2024-11-18 15:09:02.200918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.769 [2024-11-18 15:09:02.200931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.769 [2024-11-18 15:09:02.213464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.769 [2024-11-18 15:09:02.213493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.769 [2024-11-18 15:09:02.213505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.769 [2024-11-18 15:09:02.223880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.769 [2024-11-18 15:09:02.223909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.769 [2024-11-18 15:09:02.223921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.769 [2024-11-18 15:09:02.233357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.769 [2024-11-18 15:09:02.233416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.769 [2024-11-18 15:09:02.233443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.769 [2024-11-18 15:09:02.242847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.769 [2024-11-18 
15:09:02.242876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.769 [2024-11-18 15:09:02.242887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.769 [2024-11-18 15:09:02.252176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.769 [2024-11-18 15:09:02.252205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.769 [2024-11-18 15:09:02.252216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.769 [2024-11-18 15:09:02.264359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.769 [2024-11-18 15:09:02.264396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.769 [2024-11-18 15:09:02.264409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.769 [2024-11-18 15:09:02.276720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.770 [2024-11-18 15:09:02.276761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.770 [2024-11-18 15:09:02.276773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.770 [2024-11-18 15:09:02.287426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.770 [2024-11-18 15:09:02.287456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.770 [2024-11-18 15:09:02.287468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.770 [2024-11-18 15:09:02.298046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.770 [2024-11-18 15:09:02.298076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.770 [2024-11-18 15:09:02.298087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.770 [2024-11-18 15:09:02.308020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.770 [2024-11-18 15:09:02.308050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.770 [2024-11-18 15:09:02.308061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.770 [2024-11-18 15:09:02.318427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16948d0) 00:22:38.770 [2024-11-18 15:09:02.318472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.770 [2024-11-18 15:09:02.318484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.770 [2024-11-18 15:09:02.329203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.770 [2024-11-18 15:09:02.329248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.770 [2024-11-18 15:09:02.329262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.770 [2024-11-18 15:09:02.339573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.770 [2024-11-18 15:09:02.339600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.770 [2024-11-18 15:09:02.339611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.770 [2024-11-18 15:09:02.349819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:38.770 [2024-11-18 15:09:02.349847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.770 [2024-11-18 15:09:02.349859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.029 [2024-11-18 15:09:02.363260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:39.029 [2024-11-18 15:09:02.363302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.029 [2024-11-18 15:09:02.363323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.029 [2024-11-18 15:09:02.373524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:39.029 [2024-11-18 15:09:02.373566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.029 [2024-11-18 15:09:02.373577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.029 [2024-11-18 15:09:02.384668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:39.029 [2024-11-18 15:09:02.384697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.029 [2024-11-18 15:09:02.384709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.029 [2024-11-18 15:09:02.395527] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:39.029 [2024-11-18 15:09:02.395556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.029 [2024-11-18 15:09:02.395567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.029 [2024-11-18 15:09:02.406534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:39.029 [2024-11-18 15:09:02.406562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.029 [2024-11-18 15:09:02.406575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.029 [2024-11-18 15:09:02.419164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:39.029 [2024-11-18 15:09:02.419192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.029 [2024-11-18 15:09:02.419203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.029 [2024-11-18 15:09:02.430931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:39.029 [2024-11-18 15:09:02.430960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.029 [2024-11-18 15:09:02.430971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.029 [2024-11-18 15:09:02.440571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:39.029 [2024-11-18 15:09:02.440599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.029 [2024-11-18 15:09:02.440613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.029 [2024-11-18 15:09:02.453570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:39.029 [2024-11-18 15:09:02.453600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.029 [2024-11-18 15:09:02.453612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.029 [2024-11-18 15:09:02.466913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:39.029 [2024-11-18 15:09:02.466943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.029 [2024-11-18 15:09:02.466955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:39.029 [2024-11-18 15:09:02.479848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:39.029 [2024-11-18 15:09:02.479894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.029 [2024-11-18 15:09:02.479905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.029 [2024-11-18 15:09:02.492948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:39.029 [2024-11-18 15:09:02.492978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.029 [2024-11-18 15:09:02.492989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.029 [2024-11-18 15:09:02.505663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:39.029 [2024-11-18 15:09:02.505692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.029 [2024-11-18 15:09:02.505702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.029 [2024-11-18 15:09:02.515063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:39.029 [2024-11-18 15:09:02.515093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.029 [2024-11-18 15:09:02.515104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.029 [2024-11-18 15:09:02.524958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:39.029 [2024-11-18 15:09:02.524989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.029 [2024-11-18 15:09:02.525000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.029 [2024-11-18 15:09:02.534815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:39.029 [2024-11-18 15:09:02.534859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.029 [2024-11-18 15:09:02.534871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.029 [2024-11-18 15:09:02.545694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0) 00:22:39.029 [2024-11-18 15:09:02.545724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.029 [2024-11-18 15:09:02.545735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:39.029 [2024-11-18 15:09:02.556692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0)
00:22:39.029 [2024-11-18 15:09:02.556722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:39.029 [2024-11-18 15:09:02.556734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:39.029 [2024-11-18 15:09:02.566372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0)
00:22:39.029 [2024-11-18 15:09:02.566399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:39.029 [2024-11-18 15:09:02.566410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:39.029 [2024-11-18 15:09:02.576989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0)
00:22:39.030 [2024-11-18 15:09:02.577018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:39.030 [2024-11-18 15:09:02.577029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:39.030 [2024-11-18 15:09:02.587810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16948d0)
00:22:39.030 [2024-11-18 15:09:02.587840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:39.030 [2024-11-18 15:09:02.587851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:39.030
00:22:39.030 Latency(us)
00:22:39.030 [2024-11-18T15:09:02.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:39.030 [2024-11-18T15:09:02.620Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:22:39.030 nvme0n1 : 2.00 23414.52 91.46 0.00 0.00 5462.45 2293.76 15966.95
00:22:39.030 [2024-11-18T15:09:02.620Z] ===================================================================================================================
00:22:39.030 [2024-11-18T15:09:02.620Z] Total : 23414.52 91.46 0.00 0.00 5462.45 2293.76 15966.95
00:22:39.030 0
00:22:39.030 15:09:02 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:22:39.287 15:09:02 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:22:39.287 15:09:02 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:39.287 | .driver_specific
00:22:39.287 | .nvme_error
00:22:39.287 | .status_code
00:22:39.287 | .command_transient_transport_error'
00:22:39.287 15:09:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:39.287 15:09:02 -- host/digest.sh@71 -- # (( 183 > 0 ))
00:22:39.287 15:09:02 -- host/digest.sh@73 -- # killprocess 97888
00:22:39.287 15:09:02 -- common/autotest_common.sh@936 -- # '[' -z 97888 ']'
00:22:39.287 15:09:02 -- common/autotest_common.sh@940 -- # kill -0 97888
00:22:39.545 15:09:02 -- common/autotest_common.sh@941 -- # uname
00:22:39.545 15:09:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:39.545 15:09:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97888
00:22:39.545 15:09:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:22:39.545 15:09:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:22:39.545 15:09:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97888'
killing process with pid 97888
Received shutdown signal, test time was about 2.000000 seconds
00:22:39.545
00:22:39.545 Latency(us)
00:22:39.545 [2024-11-18T15:09:03.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:39.545 [2024-11-18T15:09:03.135Z] ===================================================================================================================
00:22:39.545 [2024-11-18T15:09:03.135Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:39.545 15:09:02 -- common/autotest_common.sh@955 -- # kill 97888
00:22:39.545 15:09:02 -- common/autotest_common.sh@960 -- # wait 97888
00:22:39.802 15:09:03 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:22:39.802 15:09:03 -- host/digest.sh@54 -- # local rw bs qd
00:22:39.802 15:09:03 -- host/digest.sh@56 -- # rw=randread
00:22:39.802 15:09:03 -- host/digest.sh@56 -- # bs=131072
00:22:39.802 15:09:03 -- host/digest.sh@56 -- # qd=16
00:22:39.802 15:09:03 -- host/digest.sh@58 -- # bperfpid=97976
00:22:39.802 15:09:03 -- host/digest.sh@60 -- # waitforlisten 97976 /var/tmp/bperf.sock
00:22:39.802 15:09:03 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:22:39.802 15:09:03 -- common/autotest_common.sh@829 -- # '[' -z 97976 ']'
00:22:39.802 15:09:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:39.802 15:09:03 -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:39.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:22:39.802 15:09:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:39.802 15:09:03 -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:39.802 15:09:03 -- common/autotest_common.sh@10 -- # set +x
00:22:39.802 I/O size of 131072 is greater than zero copy threshold (65536).
00:22:39.802 Zero copy mechanism will not be used.
00:22:39.802 [2024-11-18 15:09:03.208829] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:39.802 [2024-11-18 15:09:03.208917] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97976 ]
00:22:39.802 [2024-11-18 15:09:03.333315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:40.060 [2024-11-18 15:09:03.410230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:40.625 15:09:04 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:40.625 15:09:04 -- common/autotest_common.sh@862 -- # return 0
00:22:40.625 15:09:04 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:40.625 15:09:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:40.883 15:09:04 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:40.883 15:09:04 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:40.883 15:09:04 -- common/autotest_common.sh@10 -- # set +x
00:22:40.883 15:09:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:40.883 15:09:04 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:40.883 15:09:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:41.141 nvme0n1
00:22:41.141 15:09:04 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:22:41.141 15:09:04 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:41.141 15:09:04 -- common/autotest_common.sh@10 -- # set +x
00:22:41.141 15:09:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:41.141 15:09:04 -- host/digest.sh@69 -- # bperf_py perform_tests
00:22:41.141 15:09:04 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:41.401 I/O size of 131072 is greater than zero copy threshold (65536).
00:22:41.401 Zero copy mechanism will not be used.
00:22:41.401 Running I/O for 2 seconds...
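The trace above is the whole recipe for this pass of the digest test: bdevperf is started idle on a private RPC socket, the NVMe bdev is attached over TCP with data digest enabled (--ddgst), crc32c corruption is injected through the accel_error module, and perform_tests kicks off the timed run. A minimal stand-alone sketch of the same sequence, assuming the SPDK repo is at $SPDK_DIR and the NVMe-oF TCP target set up earlier in this job is still listening on 10.0.0.2:4420 (the socket path and all RPC arguments are copied from the trace; the rpc helper and the polling loop are stand-ins for the harness's bperf_rpc/waitforlisten):

  # start bdevperf idle (-z) with its JSON-RPC server on a private socket
  "$SPDK_DIR"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

  # wait until the RPC socket answers before configuring anything
  rpc() { "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  until rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

  # same RPC sequence as host/digest.sh@61-67 in the trace above
  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc accel_error_inject_error -o crc32c -t disable
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc accel_error_inject_error -o crc32c -t corrupt -i 32

  # start the timed I/O, as bperf_py perform_tests does above
  "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests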
00:22:41.401 [2024-11-18 15:09:04.763755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.401 [2024-11-18 15:09:04.763808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.401 [2024-11-18 15:09:04.763822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.767460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.767489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.767501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.771392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.771423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.771434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.775377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.775405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.775416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.779276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.779328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.779341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.782697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.782739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.782751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.786447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.786476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.786487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.790446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.790475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.790487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.794215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.794257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.794268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.798218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.798247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.798258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.801585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.801626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.801637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.805673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.805701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.805712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.809422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.809451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.809465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.812211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.812241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.812262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.815995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.816024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.816036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.819636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.819665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.819675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.822581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.822609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.822622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.825987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.826028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.826038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.829759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.829787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.829799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.833761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.833788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.833799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.837265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.837296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.837307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.840505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.840534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.840544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.844100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.844142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.844152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.847342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.847382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.847393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.851339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.851378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.851388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.855540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.855568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.855580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.858759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.858788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.402 [2024-11-18 15:09:04.858798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.402 [2024-11-18 15:09:04.861980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.402 [2024-11-18 15:09:04.862010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 
[2024-11-18 15:09:04.862021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.865611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.865652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.865663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.868792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.868820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.868831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.872408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.872457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.872471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.876693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.876723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.876735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.880646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.880675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.880687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.883964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.883994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.884006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.887930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.887958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.887969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.891211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.891240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.891252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.894258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.894287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.894297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.898594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.898624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.898635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.902218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.902248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.902259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.905437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.905483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.905494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.910051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.910083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.910094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.913362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.913607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.913700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.917954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.918056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.918160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.922755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.922895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.922969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.927535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.927622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.927677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.931359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.931458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.931513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.935579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.935677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.935770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.939696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.939795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.939890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.944121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.944221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.944276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.947957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.948045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.948135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.951554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.951642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.951698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.954870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.954972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.955047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.958823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.958911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.958966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.962614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.403 [2024-11-18 15:09:04.962702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.403 [2024-11-18 15:09:04.962778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.403 [2024-11-18 15:09:04.965995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.404 [2024-11-18 15:09:04.966084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.404 [2024-11-18 15:09:04.966157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.404 [2024-11-18 15:09:04.969183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.404 
[2024-11-18 15:09:04.969283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.404 [2024-11-18 15:09:04.969372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.404 [2024-11-18 15:09:04.973131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.404 [2024-11-18 15:09:04.973247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.404 [2024-11-18 15:09:04.973330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.404 [2024-11-18 15:09:04.976727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.404 [2024-11-18 15:09:04.976816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.404 [2024-11-18 15:09:04.976834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.404 [2024-11-18 15:09:04.980213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.404 [2024-11-18 15:09:04.980245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.404 [2024-11-18 15:09:04.980257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.404 [2024-11-18 15:09:04.983736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.404 [2024-11-18 15:09:04.983766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.404 [2024-11-18 15:09:04.983778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:04.987671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:04.987714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:04.987741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:04.991710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:04.991740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:04.991750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:04.994877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:04.994907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:04.994917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:04.998805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:04.998835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:04.998845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:05.002279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:05.002309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:05.002330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:05.006118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:05.006146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:05.006157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:05.009239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:05.009281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:05.009292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:05.012770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:05.012799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:05.012809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:05.016113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:05.016143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:05.016153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:05.019635] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:05.019664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:05.019675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:05.022792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:05.022820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:05.022830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:05.026686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:05.026715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:05.026726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:05.030141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:05.030169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:05.030181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:05.033642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:05.033671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:05.033682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:05.036335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:05.036362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:05.036373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:05.040445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:05.040473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:05.040484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:41.665 [2024-11-18 15:09:05.043861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:05.043890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:05.043900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:05.047305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:05.047345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:05.047363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:05.050516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:05.050544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:05.050555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:05.054424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:05.054453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:05.054465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:05.058356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:05.058396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:05.058406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:05.062423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:05.062461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:05.062472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:05.066427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:05.066465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.665 [2024-11-18 15:09:05.066475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.665 [2024-11-18 15:09:05.070286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.665 [2024-11-18 15:09:05.070323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.070338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.073323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.073361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.073372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.076415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.076443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.076453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.079867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.079896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.079907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.083399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.083428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.083438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.087138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.087167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.087177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.090177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.090205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.090216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.093693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.093722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.093733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.097149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.097178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.097191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.100895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.100922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.100933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.104540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.104569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.104580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.108046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.108075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.108086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.111435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.111464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.111474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.115000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.115029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.115040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.118653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.118694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.118705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.122515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.122544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.122555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.125865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.125895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.125906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.128991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.129019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.129030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.131860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.131889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.131899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.135742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.135771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.135783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.139199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.139228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 
15:09:05.139239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.142483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.142524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.142536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.145473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.145502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.145512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.149245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.149274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.149285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.152432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.152459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.152469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.155961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.155990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.156001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.666 [2024-11-18 15:09:05.159541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.666 [2024-11-18 15:09:05.159569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.666 [2024-11-18 15:09:05.159580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.162496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 [2024-11-18 15:09:05.162525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:41.667 [2024-11-18 15:09:05.162536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.166220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 [2024-11-18 15:09:05.166249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.667 [2024-11-18 15:09:05.166260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.169724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 [2024-11-18 15:09:05.169753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.667 [2024-11-18 15:09:05.169764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.172871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 [2024-11-18 15:09:05.172899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.667 [2024-11-18 15:09:05.172911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.176124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 [2024-11-18 15:09:05.176152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.667 [2024-11-18 15:09:05.176163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.179442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 [2024-11-18 15:09:05.179471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.667 [2024-11-18 15:09:05.179482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.182863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 [2024-11-18 15:09:05.182893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.667 [2024-11-18 15:09:05.182904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.186629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 [2024-11-18 15:09:05.186669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.667 [2024-11-18 15:09:05.186679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.190206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 [2024-11-18 15:09:05.190235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.667 [2024-11-18 15:09:05.190245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.194507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 [2024-11-18 15:09:05.194548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.667 [2024-11-18 15:09:05.194559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.198177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 [2024-11-18 15:09:05.198206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.667 [2024-11-18 15:09:05.198232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.201412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 [2024-11-18 15:09:05.201472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.667 [2024-11-18 15:09:05.201483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.205135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 [2024-11-18 15:09:05.205178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.667 [2024-11-18 15:09:05.205190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.208540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 [2024-11-18 15:09:05.208568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.667 [2024-11-18 15:09:05.208578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.211892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 [2024-11-18 15:09:05.211921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.667 [2024-11-18 15:09:05.211932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.215688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 [2024-11-18 15:09:05.215717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.667 [2024-11-18 15:09:05.215727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.219090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 [2024-11-18 15:09:05.219119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.667 [2024-11-18 15:09:05.219129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.222914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 [2024-11-18 15:09:05.222942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.667 [2024-11-18 15:09:05.222952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.226233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 [2024-11-18 15:09:05.226263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.667 [2024-11-18 15:09:05.226273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.229907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 [2024-11-18 15:09:05.229936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.667 [2024-11-18 15:09:05.229946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.232912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 [2024-11-18 15:09:05.232942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.667 [2024-11-18 15:09:05.232952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.236436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 
[2024-11-18 15:09:05.236463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.667 [2024-11-18 15:09:05.236474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.667 [2024-11-18 15:09:05.240369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.667 [2024-11-18 15:09:05.240396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.668 [2024-11-18 15:09:05.240407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.668 [2024-11-18 15:09:05.243928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.668 [2024-11-18 15:09:05.243958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.668 [2024-11-18 15:09:05.243968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.668 [2024-11-18 15:09:05.247722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.668 [2024-11-18 15:09:05.247751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.668 [2024-11-18 15:09:05.247761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.251171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.251201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.251212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.254874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.254902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.254912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.258723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.258768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.258779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.262102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.262131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.262142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.265288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.265339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.265352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.268330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.268357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.268367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.271841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.271869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.271880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.275042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.275071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.275082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.278573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.278602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.278612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.282397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.282425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.282435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.285940] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.285968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.285979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.289756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.289784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.289795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.293025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.293053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.293072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.296041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.296069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.296080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.299411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.299440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.299451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.303175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.303204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.303214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.306560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.306601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.306611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
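Every record in this stretch of the console output follows the same pattern: nvme_tcp.c flags a data digest (CRC32C) mismatch on the receive path for qpair 0x68ad10, and nvme_qpair.c then prints the affected READ command together with its completion carrying TRANSIENT TRANSPORT ERROR (00/22), which is consistent with the digest error-injection part of this test run. As a purely illustrative aid (not part of the SPDK test scripts), the following minimal sketch assumes a console dump like the one above is piped in on stdin and tallies the injected failures per command identifier:

#!/usr/bin/env python3
# Hypothetical log-summarizing helper for the digest error-injection output
# shown above; it is an assumption/illustration, not an SPDK utility.
import re
import sys
from collections import Counter

# Matches the nvme_qpair.c READ notice printed after each digest error,
# e.g. "READ sqid:1 cid:15 nsid:1 lba:17600 len:32 ..."
READ_RE = re.compile(r"READ sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
# Matches the nvme_tcp.c digest-failure line, e.g. "data digest error on tqpair=(0x68ad10)"
DIGEST_RE = re.compile(r"data digest error on tqpair=\((0x[0-9a-f]+)\)")

def summarize(lines):
    per_cid = Counter()
    digest_errors = 0
    for line in lines:
        digest_errors += len(DIGEST_RE.findall(line))
        for _sqid, cid, _nsid, _lba, _length in READ_RE.findall(line):
            per_cid[int(cid)] += 1
    return digest_errors, per_cid

if __name__ == "__main__":
    errors, per_cid = summarize(sys.stdin)
    print(f"data digest errors: {errors}")
    for cid, count in sorted(per_cid.items()):
        print(f"  cid {cid}: {count} failed READs")

Run as, for example, `python3 summarize_digest_errors.py < console.log`; the per-cid counts make it easy to confirm that the injected digest failures are spread across the queue pair's outstanding commands rather than stuck on a single one.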
00:22:41.929 [2024-11-18 15:09:05.309902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.309931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.309941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.313654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.313683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.313693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.317226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.317267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.317278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.321039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.321073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.321092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.324725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.324755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.324765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.328594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.328623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.328634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.332442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.332469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.332480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.336093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.929 [2024-11-18 15:09:05.336121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.929 [2024-11-18 15:09:05.336132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.929 [2024-11-18 15:09:05.339983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.340010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.340020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.342956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.342984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.342995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.345868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.345896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.345906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.349557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.349598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.349609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.353429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.353457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.353468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.356888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.356915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.356926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.360662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.360690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.360700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.364149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.364178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.364188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.367698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.367727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.367737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.371430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.371457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.371467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.375197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.375224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.375235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.378568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.378597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.378608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.382160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.382190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.382201] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.385607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.385636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.385646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.389479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.389522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.389534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.393032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.393067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.393090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.396466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.396495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.396505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.400325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.400352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.400362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.403223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.403252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.403263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.407039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.407068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 
[2024-11-18 15:09:05.407078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.410918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.410947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.410958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.414448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.414490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.414501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.417479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.417507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.417519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.420431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.420459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.420470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.423843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.423871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.423882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.427547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.427576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.427586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.930 [2024-11-18 15:09:05.430827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.930 [2024-11-18 15:09:05.430856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:576 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:41.930 [2024-11-18 15:09:05.430866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.931 [2024-11-18 15:09:05.434354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.931 [2024-11-18 15:09:05.434381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.931 [2024-11-18 15:09:05.434393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.931 [2024-11-18 15:09:05.438063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.931 [2024-11-18 15:09:05.438090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.931 [2024-11-18 15:09:05.438100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.931 [2024-11-18 15:09:05.441562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.931 [2024-11-18 15:09:05.441590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.931 [2024-11-18 15:09:05.441601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.931 [2024-11-18 15:09:05.445752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.931 [2024-11-18 15:09:05.445778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.931 [2024-11-18 15:09:05.445789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.931 [2024-11-18 15:09:05.449582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.931 [2024-11-18 15:09:05.449609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.931 [2024-11-18 15:09:05.449619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.931 [2024-11-18 15:09:05.453013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.931 [2024-11-18 15:09:05.453041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.931 [2024-11-18 15:09:05.453052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.931 [2024-11-18 15:09:05.457557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.931 [2024-11-18 15:09:05.457594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.931 [2024-11-18 15:09:05.457616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.931 [2024-11-18 15:09:05.461359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.931 [2024-11-18 15:09:05.461401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.931 [2024-11-18 15:09:05.461414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.931 [2024-11-18 15:09:05.465148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.931 [2024-11-18 15:09:05.465178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.931 [2024-11-18 15:09:05.465190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.931 [2024-11-18 15:09:05.468574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.931 [2024-11-18 15:09:05.468602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.931 [2024-11-18 15:09:05.468612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.931 [2024-11-18 15:09:05.472023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.931 [2024-11-18 15:09:05.472052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.931 [2024-11-18 15:09:05.472063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.931 [2024-11-18 15:09:05.475827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.931 [2024-11-18 15:09:05.475855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.931 [2024-11-18 15:09:05.475866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.931 [2024-11-18 15:09:05.479173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.931 [2024-11-18 15:09:05.479201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.931 [2024-11-18 15:09:05.479212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.931 [2024-11-18 15:09:05.482416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.931 [2024-11-18 15:09:05.482445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.931 [2024-11-18 15:09:05.482455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.931 [2024-11-18 15:09:05.486079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.931 [2024-11-18 15:09:05.486108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.931 [2024-11-18 15:09:05.486119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.931 [2024-11-18 15:09:05.489755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.931 [2024-11-18 15:09:05.489783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.931 [2024-11-18 15:09:05.489793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.931 [2024-11-18 15:09:05.493894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.931 [2024-11-18 15:09:05.493922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.931 [2024-11-18 15:09:05.493933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.931 [2024-11-18 15:09:05.497666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.931 [2024-11-18 15:09:05.497708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.931 [2024-11-18 15:09:05.497719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.931 [2024-11-18 15:09:05.501794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.931 [2024-11-18 15:09:05.501822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.931 [2024-11-18 15:09:05.501833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.931 [2024-11-18 15:09:05.505207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.931 [2024-11-18 15:09:05.505250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.931 [2024-11-18 15:09:05.505261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.931 [2024-11-18 15:09:05.508244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.931 
[2024-11-18 15:09:05.508271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.931 [2024-11-18 15:09:05.508281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.931 [2024-11-18 15:09:05.511285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:41.931 [2024-11-18 15:09:05.511324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.931 [2024-11-18 15:09:05.511336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.193 [2024-11-18 15:09:05.515213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.193 [2024-11-18 15:09:05.515257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.193 [2024-11-18 15:09:05.515268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.193 [2024-11-18 15:09:05.518706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.193 [2024-11-18 15:09:05.518734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.193 [2024-11-18 15:09:05.518745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.193 [2024-11-18 15:09:05.522821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.193 [2024-11-18 15:09:05.522848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.193 [2024-11-18 15:09:05.522859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.193 [2024-11-18 15:09:05.526983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.193 [2024-11-18 15:09:05.527010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.193 [2024-11-18 15:09:05.527021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.193 [2024-11-18 15:09:05.531174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.193 [2024-11-18 15:09:05.531201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.193 [2024-11-18 15:09:05.531212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.193 [2024-11-18 15:09:05.534904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x68ad10) 00:22:42.193 [2024-11-18 15:09:05.534931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.193 [2024-11-18 15:09:05.534942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.193 [2024-11-18 15:09:05.538959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.193 [2024-11-18 15:09:05.538987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.193 [2024-11-18 15:09:05.538997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.193 [2024-11-18 15:09:05.542453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.193 [2024-11-18 15:09:05.542482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.193 [2024-11-18 15:09:05.542493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.193 [2024-11-18 15:09:05.546274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.193 [2024-11-18 15:09:05.546304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.193 [2024-11-18 15:09:05.546324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.193 [2024-11-18 15:09:05.549990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.193 [2024-11-18 15:09:05.550019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.193 [2024-11-18 15:09:05.550029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.193 [2024-11-18 15:09:05.553452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.193 [2024-11-18 15:09:05.553481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.193 [2024-11-18 15:09:05.553491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.193 [2024-11-18 15:09:05.556410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.193 [2024-11-18 15:09:05.556438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.556449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.194 [2024-11-18 15:09:05.559552] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.559581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.559593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.194 [2024-11-18 15:09:05.563476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.563504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.563515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.194 [2024-11-18 15:09:05.567009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.567039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.567050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.194 [2024-11-18 15:09:05.571153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.571182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.571193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.194 [2024-11-18 15:09:05.575307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.575357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.575368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.194 [2024-11-18 15:09:05.578247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.578275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.578287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.194 [2024-11-18 15:09:05.581884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.581913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.581924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:42.194 [2024-11-18 15:09:05.585433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.585461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.585471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.194 [2024-11-18 15:09:05.588799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.588827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.588837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.194 [2024-11-18 15:09:05.592425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.592454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.592467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.194 [2024-11-18 15:09:05.595923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.595952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.595963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.194 [2024-11-18 15:09:05.599242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.599271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.599281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.194 [2024-11-18 15:09:05.602788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.602817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.602828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.194 [2024-11-18 15:09:05.606608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.606637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.606648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.194 [2024-11-18 15:09:05.610325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.610352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.610362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.194 [2024-11-18 15:09:05.614302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.614337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.614349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.194 [2024-11-18 15:09:05.618129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.618157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.618167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.194 [2024-11-18 15:09:05.621864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.621892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.621903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.194 [2024-11-18 15:09:05.625722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.625749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.625759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.194 [2024-11-18 15:09:05.629520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.629547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.629558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.194 [2024-11-18 15:09:05.633404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.633431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.633441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.194 [2024-11-18 15:09:05.637164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.637191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.194 [2024-11-18 15:09:05.637202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.194 [2024-11-18 15:09:05.641264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.194 [2024-11-18 15:09:05.641303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.641324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.644872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.644899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.644909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.648363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.648392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.648403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.651073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.651101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.651111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.654500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.654528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.654539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.657875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.657903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.657915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.661651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.661679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.661690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.664986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.665014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.665025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.668733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.668762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.668772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.671812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.671842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.671853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.675447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.675476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.675488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.678955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.678983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.678993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.683145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.683175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 
[2024-11-18 15:09:05.683186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.686455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.686483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.686494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.690540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.690568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.690578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.694031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.694059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.694070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.698232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.698260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.698270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.701402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.701430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.701441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.704987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.705016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.705027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.708837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.708866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3296 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.708876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.712249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.712278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.712290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.716239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.716269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.716280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.719600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.719629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.719639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.722372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.722400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.195 [2024-11-18 15:09:05.722411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.195 [2024-11-18 15:09:05.726293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.195 [2024-11-18 15:09:05.726331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.196 [2024-11-18 15:09:05.726343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.196 [2024-11-18 15:09:05.729435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.196 [2024-11-18 15:09:05.729464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.196 [2024-11-18 15:09:05.729475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.196 [2024-11-18 15:09:05.732980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.196 [2024-11-18 15:09:05.733008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.196 [2024-11-18 15:09:05.733018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.196 [2024-11-18 15:09:05.736269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.196 [2024-11-18 15:09:05.736298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.196 [2024-11-18 15:09:05.736308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.196 [2024-11-18 15:09:05.740028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.196 [2024-11-18 15:09:05.740058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.196 [2024-11-18 15:09:05.740069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.196 [2024-11-18 15:09:05.743562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.196 [2024-11-18 15:09:05.743592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.196 [2024-11-18 15:09:05.743602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.196 [2024-11-18 15:09:05.747250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.196 [2024-11-18 15:09:05.747291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.196 [2024-11-18 15:09:05.747302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.196 [2024-11-18 15:09:05.751289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.196 [2024-11-18 15:09:05.751338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.196 [2024-11-18 15:09:05.751353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.196 [2024-11-18 15:09:05.754655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.196 [2024-11-18 15:09:05.754695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.196 [2024-11-18 15:09:05.754706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.196 [2024-11-18 15:09:05.758653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.196 [2024-11-18 15:09:05.758692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.196 [2024-11-18 15:09:05.758703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.196 [2024-11-18 15:09:05.763135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.196 [2024-11-18 15:09:05.763163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.196 [2024-11-18 15:09:05.763175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.196 [2024-11-18 15:09:05.767495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.196 [2024-11-18 15:09:05.767523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.196 [2024-11-18 15:09:05.767534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.196 [2024-11-18 15:09:05.771030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.196 [2024-11-18 15:09:05.771057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.196 [2024-11-18 15:09:05.771067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.196 [2024-11-18 15:09:05.774989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.196 [2024-11-18 15:09:05.775018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.196 [2024-11-18 15:09:05.775029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.458 [2024-11-18 15:09:05.778705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.458 [2024-11-18 15:09:05.778755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.458 [2024-11-18 15:09:05.778765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.458 [2024-11-18 15:09:05.782829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.458 [2024-11-18 15:09:05.782858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.458 [2024-11-18 15:09:05.782869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.458 [2024-11-18 15:09:05.786235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.458 
[2024-11-18 15:09:05.786265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.458 [2024-11-18 15:09:05.786277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.458 [2024-11-18 15:09:05.789731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.458 [2024-11-18 15:09:05.789760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.458 [2024-11-18 15:09:05.789770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.458 [2024-11-18 15:09:05.793299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.458 [2024-11-18 15:09:05.793342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.458 [2024-11-18 15:09:05.793354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.458 [2024-11-18 15:09:05.796946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.458 [2024-11-18 15:09:05.796974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.458 [2024-11-18 15:09:05.796985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.458 [2024-11-18 15:09:05.800592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.458 [2024-11-18 15:09:05.800621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.458 [2024-11-18 15:09:05.800631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.458 [2024-11-18 15:09:05.803808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.458 [2024-11-18 15:09:05.803837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.458 [2024-11-18 15:09:05.803848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.458 [2024-11-18 15:09:05.807354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.458 [2024-11-18 15:09:05.807382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.458 [2024-11-18 15:09:05.807393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.458 [2024-11-18 15:09:05.811695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x68ad10) 00:22:42.458 [2024-11-18 15:09:05.811724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.458 [2024-11-18 15:09:05.811734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.458 [2024-11-18 15:09:05.815214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.458 [2024-11-18 15:09:05.815242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.458 [2024-11-18 15:09:05.815253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.458 [2024-11-18 15:09:05.818837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.458 [2024-11-18 15:09:05.818866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.458 [2024-11-18 15:09:05.818877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.458 [2024-11-18 15:09:05.822518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.458 [2024-11-18 15:09:05.822546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.458 [2024-11-18 15:09:05.822558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.458 [2024-11-18 15:09:05.825658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.458 [2024-11-18 15:09:05.825688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.458 [2024-11-18 15:09:05.825698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.458 [2024-11-18 15:09:05.828587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.458 [2024-11-18 15:09:05.828615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.458 [2024-11-18 15:09:05.828625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.458 [2024-11-18 15:09:05.832222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.458 [2024-11-18 15:09:05.832263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.458 [2024-11-18 15:09:05.832274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.458 [2024-11-18 15:09:05.835618] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.458 [2024-11-18 15:09:05.835646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.458 [2024-11-18 15:09:05.835659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.458 [2024-11-18 15:09:05.839530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.458 [2024-11-18 15:09:05.839560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.458 [2024-11-18 15:09:05.839572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.458 [2024-11-18 15:09:05.842620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.458 [2024-11-18 15:09:05.842648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.458 [2024-11-18 15:09:05.842660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.458 [2024-11-18 15:09:05.845760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.458 [2024-11-18 15:09:05.845789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.458 [2024-11-18 15:09:05.845800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.458 [2024-11-18 15:09:05.849269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.458 [2024-11-18 15:09:05.849310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.458 [2024-11-18 15:09:05.849332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.458 [2024-11-18 15:09:05.852723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.852750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.852762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.856480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.856508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.856519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:42.459 [2024-11-18 15:09:05.859716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.859745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.859756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.863153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.863181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.863192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.866924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.866952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.866964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.869787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.869814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.869825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.873793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.873822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.873832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.877746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.877776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.877787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.881192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.881234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.881246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.884937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.884966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.884977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.888485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.888513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.888523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.892434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.892462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.892472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.896495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.896523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.896533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.900289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.900328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.900340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.903776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.903804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.903814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.907204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.907233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.907244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.910341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.910368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.910379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.913793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.913822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.913833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.917730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.917759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.917770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.922397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.922425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.922436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.925943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.925973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.925983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.929571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.929600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.929611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.933805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.933834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.933845] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.938604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.938634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.938644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.942430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.942459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.942469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.946607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.946638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.946649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.949715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.459 [2024-11-18 15:09:05.949743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.459 [2024-11-18 15:09:05.949754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.459 [2024-11-18 15:09:05.953460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:05.953489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:05.953501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:05.957186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:05.957215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:05.957226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:05.960954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:05.960982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:42.460 [2024-11-18 15:09:05.960993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:05.964505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:05.964534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:05.964545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:05.967491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:05.967519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:05.967532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:05.971107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:05.971136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:05.971149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:05.974690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:05.974719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:05.974731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:05.978372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:05.978413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:05.978423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:05.981521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:05.981550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:05.981562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:05.985079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:05.985123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8960 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:05.985135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:05.988830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:05.988859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:05.988869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:05.991935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:05.991976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:05.991987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:05.995529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:05.995569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:05.995580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:05.999844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:05.999886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:05.999896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:06.002920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:06.002961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:06.002971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:06.006269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:06.006297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:06.006309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:06.010532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:06.010559] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:06.010573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:06.014463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:06.014490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:06.014504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:06.018423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:06.018451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:06.018463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:06.022099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:06.022125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:06.022135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:06.025901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:06.025929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:06.025940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:06.029872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:06.029901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:06.029912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:06.034070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:06.034098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:06.034110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:06.037489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:06.037528] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:06.037540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.460 [2024-11-18 15:09:06.041345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.460 [2024-11-18 15:09:06.041384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.460 [2024-11-18 15:09:06.041396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.725 [2024-11-18 15:09:06.045486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.725 [2024-11-18 15:09:06.045513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.725 [2024-11-18 15:09:06.045526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.725 [2024-11-18 15:09:06.049356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.725 [2024-11-18 15:09:06.049385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.725 [2024-11-18 15:09:06.049411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.725 [2024-11-18 15:09:06.053246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.725 [2024-11-18 15:09:06.053277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.725 [2024-11-18 15:09:06.053288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.725 [2024-11-18 15:09:06.057589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.725 [2024-11-18 15:09:06.057630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.725 [2024-11-18 15:09:06.057641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.725 [2024-11-18 15:09:06.060946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.725 [2024-11-18 15:09:06.060989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.725 [2024-11-18 15:09:06.061000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.725 [2024-11-18 15:09:06.064512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 
00:22:42.725 [2024-11-18 15:09:06.064543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.725 [2024-11-18 15:09:06.064554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.725 [2024-11-18 15:09:06.067792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.725 [2024-11-18 15:09:06.067822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.725 [2024-11-18 15:09:06.067833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.725 [2024-11-18 15:09:06.070889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.725 [2024-11-18 15:09:06.070918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.725 [2024-11-18 15:09:06.070930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.725 [2024-11-18 15:09:06.074803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.725 [2024-11-18 15:09:06.074844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.725 [2024-11-18 15:09:06.074866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.725 [2024-11-18 15:09:06.078185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.726 [2024-11-18 15:09:06.078213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.726 [2024-11-18 15:09:06.078224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.726 [2024-11-18 15:09:06.081835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.726 [2024-11-18 15:09:06.081876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.726 [2024-11-18 15:09:06.081887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.726 [2024-11-18 15:09:06.085526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.726 [2024-11-18 15:09:06.085555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.726 [2024-11-18 15:09:06.085565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.726 [2024-11-18 15:09:06.089012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.726 [2024-11-18 15:09:06.089041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.726 [2024-11-18 15:09:06.089052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.726 [2024-11-18 15:09:06.092880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.726 [2024-11-18 15:09:06.092921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.726 [2024-11-18 15:09:06.092932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.726 [2024-11-18 15:09:06.096155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.726 [2024-11-18 15:09:06.096184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.726 [2024-11-18 15:09:06.096195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.726 [2024-11-18 15:09:06.099303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.726 [2024-11-18 15:09:06.099352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.726 [2024-11-18 15:09:06.099363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.726 [2024-11-18 15:09:06.103015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.726 [2024-11-18 15:09:06.103044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.726 [2024-11-18 15:09:06.103055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.726 [2024-11-18 15:09:06.106811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.726 [2024-11-18 15:09:06.106840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.726 [2024-11-18 15:09:06.106851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.726 [2024-11-18 15:09:06.109729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.726 [2024-11-18 15:09:06.109770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.726 [2024-11-18 15:09:06.109781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.726 [2024-11-18 15:09:06.113604] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.726 [2024-11-18 15:09:06.113632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.726 [2024-11-18 15:09:06.113643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.726 [2024-11-18 15:09:06.116728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.726 [2024-11-18 15:09:06.116756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.726 [2024-11-18 15:09:06.116767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.726 [2024-11-18 15:09:06.120554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.726 [2024-11-18 15:09:06.120582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.726 [2024-11-18 15:09:06.120593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.726 [2024-11-18 15:09:06.124432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.726 [2024-11-18 15:09:06.124472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.726 [2024-11-18 15:09:06.124483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.726 [2024-11-18 15:09:06.127993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.726 [2024-11-18 15:09:06.128021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.726 [2024-11-18 15:09:06.128032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.726 [2024-11-18 15:09:06.132170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.726 [2024-11-18 15:09:06.132198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.726 [2024-11-18 15:09:06.132209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.726 [2024-11-18 15:09:06.135962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.726 [2024-11-18 15:09:06.135990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.726 [2024-11-18 15:09:06.136001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:42.726 [2024-11-18 15:09:06.139162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.726 [2024-11-18 15:09:06.139190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.726 [2024-11-18 15:09:06.139201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.726 [2024-11-18 15:09:06.143024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.726 [2024-11-18 15:09:06.143052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.726 [2024-11-18 15:09:06.143063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.726 [2024-11-18 15:09:06.146258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.726 [2024-11-18 15:09:06.146287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.726 [2024-11-18 15:09:06.146297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.726 [2024-11-18 15:09:06.149727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.726 [2024-11-18 15:09:06.149755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.726 [2024-11-18 15:09:06.149765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.726 [2024-11-18 15:09:06.152896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.726 [2024-11-18 15:09:06.152924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.726 [2024-11-18 15:09:06.152935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.726 [2024-11-18 15:09:06.156277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.156305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.727 [2024-11-18 15:09:06.156327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.727 [2024-11-18 15:09:06.159824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.159853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.727 [2024-11-18 15:09:06.159863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.727 [2024-11-18 15:09:06.163185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.163214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.727 [2024-11-18 15:09:06.163225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.727 [2024-11-18 15:09:06.166726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.166755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.727 [2024-11-18 15:09:06.166765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.727 [2024-11-18 15:09:06.170928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.170956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.727 [2024-11-18 15:09:06.170967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.727 [2024-11-18 15:09:06.174619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.174647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.727 [2024-11-18 15:09:06.174658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.727 [2024-11-18 15:09:06.178365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.178392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.727 [2024-11-18 15:09:06.178402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.727 [2024-11-18 15:09:06.181122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.181153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.727 [2024-11-18 15:09:06.181164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.727 [2024-11-18 15:09:06.184990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.185018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.727 [2024-11-18 15:09:06.185029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.727 [2024-11-18 15:09:06.188652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.188681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.727 [2024-11-18 15:09:06.188692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.727 [2024-11-18 15:09:06.191920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.191949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.727 [2024-11-18 15:09:06.191960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.727 [2024-11-18 15:09:06.195016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.195045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.727 [2024-11-18 15:09:06.195057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.727 [2024-11-18 15:09:06.198488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.198518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.727 [2024-11-18 15:09:06.198528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.727 [2024-11-18 15:09:06.202307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.202344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.727 [2024-11-18 15:09:06.202355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.727 [2024-11-18 15:09:06.206298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.206334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.727 [2024-11-18 15:09:06.206346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.727 [2024-11-18 15:09:06.209685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.209714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.727 [2024-11-18 15:09:06.209725] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.727 [2024-11-18 15:09:06.212962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.212991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.727 [2024-11-18 15:09:06.213002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.727 [2024-11-18 15:09:06.215628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.215658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.727 [2024-11-18 15:09:06.215668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.727 [2024-11-18 15:09:06.219368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.219397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.727 [2024-11-18 15:09:06.219407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.727 [2024-11-18 15:09:06.222777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.222806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.727 [2024-11-18 15:09:06.222816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.727 [2024-11-18 15:09:06.226203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.226233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.727 [2024-11-18 15:09:06.226243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.727 [2024-11-18 15:09:06.229700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.229730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.727 [2024-11-18 15:09:06.229741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.727 [2024-11-18 15:09:06.233359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.727 [2024-11-18 15:09:06.233389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:42.728 [2024-11-18 15:09:06.233416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.728 [2024-11-18 15:09:06.236752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.728 [2024-11-18 15:09:06.236780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.728 [2024-11-18 15:09:06.236791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.728 [2024-11-18 15:09:06.240601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.728 [2024-11-18 15:09:06.240643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.728 [2024-11-18 15:09:06.240653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.728 [2024-11-18 15:09:06.243908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.728 [2024-11-18 15:09:06.243937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.728 [2024-11-18 15:09:06.243947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.728 [2024-11-18 15:09:06.247150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.728 [2024-11-18 15:09:06.247178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.728 [2024-11-18 15:09:06.247189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.728 [2024-11-18 15:09:06.250751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.728 [2024-11-18 15:09:06.250780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.728 [2024-11-18 15:09:06.250792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.728 [2024-11-18 15:09:06.253818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.728 [2024-11-18 15:09:06.253847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.728 [2024-11-18 15:09:06.253858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.728 [2024-11-18 15:09:06.257549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.728 [2024-11-18 15:09:06.257579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18464 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.728 [2024-11-18 15:09:06.257589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.728 [2024-11-18 15:09:06.260342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.728 [2024-11-18 15:09:06.260369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.728 [2024-11-18 15:09:06.260380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.728 [2024-11-18 15:09:06.263759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.728 [2024-11-18 15:09:06.263788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.728 [2024-11-18 15:09:06.263799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.728 [2024-11-18 15:09:06.267246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.728 [2024-11-18 15:09:06.267275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.728 [2024-11-18 15:09:06.267286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.728 [2024-11-18 15:09:06.270365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.728 [2024-11-18 15:09:06.270387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.728 [2024-11-18 15:09:06.270397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.728 [2024-11-18 15:09:06.273920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.728 [2024-11-18 15:09:06.273949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.728 [2024-11-18 15:09:06.273959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.728 [2024-11-18 15:09:06.277806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.728 [2024-11-18 15:09:06.277834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.728 [2024-11-18 15:09:06.277845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.728 [2024-11-18 15:09:06.281261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.728 [2024-11-18 15:09:06.281291] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.728 [2024-11-18 15:09:06.281302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.728 [2024-11-18 15:09:06.285007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.728 [2024-11-18 15:09:06.285035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.728 [2024-11-18 15:09:06.285045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.728 [2024-11-18 15:09:06.289304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.728 [2024-11-18 15:09:06.289343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.728 [2024-11-18 15:09:06.289355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.728 [2024-11-18 15:09:06.292506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.728 [2024-11-18 15:09:06.292534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.728 [2024-11-18 15:09:06.292544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.728 [2024-11-18 15:09:06.295361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.728 [2024-11-18 15:09:06.295389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.728 [2024-11-18 15:09:06.295399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.728 [2024-11-18 15:09:06.298796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.728 [2024-11-18 15:09:06.298825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.728 [2024-11-18 15:09:06.298835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.728 [2024-11-18 15:09:06.303127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.728 [2024-11-18 15:09:06.303156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.728 [2024-11-18 15:09:06.303169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.728 [2024-11-18 15:09:06.306942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:42.728 [2024-11-18 15:09:06.306986] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.728 [2024-11-18 15:09:06.306997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.311114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.311145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.311156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.315248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.315280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.315291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.319664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.319696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.319708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.324038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.324070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.324082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.328481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.328523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.328536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.332133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.332165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.332177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.336118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 
00:22:43.043 [2024-11-18 15:09:06.336149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.336160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.339941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.339969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.339979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.344050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.344077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.344088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.347447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.347476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.347486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.351526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.351553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.351564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.355736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.355764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.355776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.359124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.359152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.359162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.362020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.362048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.362059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.366037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.366065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.366075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.369824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.369852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.369863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.373556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.373584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.373595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.376937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.376965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.376976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.380454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.380484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.380495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.383660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.383689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.383700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.387460] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.387489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.387500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.390704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.390733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.390744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.393660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.393689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.393699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.397165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.397195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.397205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.400958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.400987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.400997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.404717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.043 [2024-11-18 15:09:06.404746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.043 [2024-11-18 15:09:06.404757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.043 [2024-11-18 15:09:06.408486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.408515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.408528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
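The entries above all follow the same pattern: nvme_tcp.c reports a data digest error on the TCP qpair (tqpair=0x68ad10), the affected READ command (qid 1, nsid 1, len 32 blocks) is printed, and the command is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). In NVMe/TCP the data digest (DDGST) is a CRC32C computed over the DATA field of the data PDU, so this part of the test is exercising the host path that rejects PDUs whose payload no longer matches the digest. A minimal, self-contained sketch of that per-PDU check follows; it is plain C for illustration, not SPDK source, and the function names are hypothetical.

/* Illustrative sketch (not SPDK source): the DDGST check whose failure
 * produces the "data digest error" lines above.  The digest is a CRC32C
 * over the PDU data payload; a mismatch fails the carried command. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Table-less, reflected CRC32C (Castagnoli), poly 0x1EDC6F41 -> 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical helper: recompute the digest over the received payload and
 * compare it against the DDGST carried at the end of the data PDU. */
static int verify_data_digest(const uint8_t *payload, size_t len, uint32_t ddgst)
{
    return (crc32c(payload, len) == ddgst) ? 0 : -1;
}

int main(void)
{
    uint8_t payload[32];
    memset(payload, 0xA5, sizeof(payload));

    uint32_t good = crc32c(payload, sizeof(payload));
    uint32_t bad  = good ^ 0x1;   /* simulate a corrupted digest/payload */

    printf("intact PDU:    %s\n",
           verify_data_digest(payload, sizeof(payload), good) == 0 ? "ok" : "digest error");
    printf("corrupted PDU: %s\n",
           verify_data_digest(payload, sizeof(payload), bad) == 0 ? "ok" : "digest error");
    return 0;
}

Run as written, the sketch prints "ok" for the intact payload and "digest error" for the corrupted one; that per-PDU decision is what each *ERROR* line above records before the command is failed back to the caller.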
00:22:43.044 [2024-11-18 15:09:06.412348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.412375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.412386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.415934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.415963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.415973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.419819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.419847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.419857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.422500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.422529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.422541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.426221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.426250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.426261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.429746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.429774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.429785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.434007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.434034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.434044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.437541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.437570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.437580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.441701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.441728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.441739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.445958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.445987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.445998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.449125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.449155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.449167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.452131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.452159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.452170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.455714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.455742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.455753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.459028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.459056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.459067] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.462783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.462811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.462822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.466704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.466733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.466744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.469941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.469970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.469980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.473441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.473471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.473482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.476797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.476826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.476836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.479992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.480021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.480032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.483347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.483375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.483386] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.486408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.486437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.486447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.489919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.489946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.489957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.494040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.494067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.494077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.498447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.498476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.498489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.044 [2024-11-18 15:09:06.501893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.044 [2024-11-18 15:09:06.501923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.044 [2024-11-18 15:09:06.501934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.505222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.505255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.505267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.508781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.508809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:43.045 [2024-11-18 15:09:06.508820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.511527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.511555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.511566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.515007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.515035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.515046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.518648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.518677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.518687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.521845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.521874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.521885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.525080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.525124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.525136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.528258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.528286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.528297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.531370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.531397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13824 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.531408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.534949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.534977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.534988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.538570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.538597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.538609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.542523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.542550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.542563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.546192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.546232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.546242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.550135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.550176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.550187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.553674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.553716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.553727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.557710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.557739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.557751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.560779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.560820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.560831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.564443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.564472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.564483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.568351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.568392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.568403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.572259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.572288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.572299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.575968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.575996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.576006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.579847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.579875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.579885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.583350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.583378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.583388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.587440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.587467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.587477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.591340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.591368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.591378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.595511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.595541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.595552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.045 [2024-11-18 15:09:06.599313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.045 [2024-11-18 15:09:06.599351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.045 [2024-11-18 15:09:06.599364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.046 [2024-11-18 15:09:06.603107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.046 [2024-11-18 15:09:06.603137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.046 [2024-11-18 15:09:06.603148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.306 [2024-11-18 15:09:06.607445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.306 [2024-11-18 15:09:06.607475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.306 [2024-11-18 15:09:06.607486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.306 [2024-11-18 15:09:06.611175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.306 
[2024-11-18 15:09:06.611204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.306 [2024-11-18 15:09:06.611215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.306 [2024-11-18 15:09:06.615165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.306 [2024-11-18 15:09:06.615194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.306 [2024-11-18 15:09:06.615206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.306 [2024-11-18 15:09:06.618384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.306 [2024-11-18 15:09:06.618411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.306 [2024-11-18 15:09:06.618422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.306 [2024-11-18 15:09:06.621502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.306 [2024-11-18 15:09:06.621528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.306 [2024-11-18 15:09:06.621539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.306 [2024-11-18 15:09:06.625285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.306 [2024-11-18 15:09:06.625326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.306 [2024-11-18 15:09:06.625339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.306 [2024-11-18 15:09:06.628925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.306 [2024-11-18 15:09:06.628953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.306 [2024-11-18 15:09:06.628964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.306 [2024-11-18 15:09:06.633041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.306 [2024-11-18 15:09:06.633114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.633126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.636709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.636740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.636767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.641113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.641157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.641169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.644831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.644862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.644874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.648929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.648958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.648968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.652549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.652577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.652587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.655493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.655537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.655547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.658907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.658936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.658947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.662240] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.662268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.662279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.666130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.666158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.666168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.669729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.669758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.669769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.672812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.672840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.672851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.676391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.676419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.676430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.680226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.680256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.680266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.684328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.684357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.684369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
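Each failing command is completed with status (00/22): status code type 0 (generic command status) and status code 0x22, which SPDK prints as COMMAND TRANSIENT TRANSPORT ERROR. The trailing fields in each completion line come from the status word of the completion queue entry; in particular dnr:0 means Do Not Retry is clear, so these READs may be resubmitted once the transient transport condition clears. A small sketch of how those printed fields map onto completion dword 3 follows; it uses the NVMe base specification layout and is not SPDK's spdk_nvme_print_completion.

/* Sketch: decode the completion status fields shown in the log above
 * (sct/sc, cid, p, m, dnr) from completion queue entry dword 3.
 * Follows the NVMe base spec CQE layout; not SPDK code. */
#include <stdint.h>
#include <stdio.h>

struct cqe_status {
    uint16_t cid;   /* command identifier (cid:N in the log) */
    uint8_t  p;     /* phase tag */
    uint8_t  sc;    /* status code */
    uint8_t  sct;   /* status code type */
    uint8_t  m;     /* more */
    uint8_t  dnr;   /* do not retry */
};

static struct cqe_status decode_cqe_dw3(uint32_t dw3)
{
    struct cqe_status s;

    s.cid = (uint16_t)(dw3 & 0xFFFFu);
    s.p   = (dw3 >> 16) & 0x1u;
    s.sc  = (dw3 >> 17) & 0xFFu;  /* 0x22 -> command transient transport error */
    s.sct = (dw3 >> 25) & 0x7u;   /* 0x0  -> generic command status */
    s.m   = (dw3 >> 30) & 0x1u;
    s.dnr = (dw3 >> 31) & 0x1u;   /* 0 -> the command may be retried */
    return s;
}

int main(void)
{
    /* Build a dword 3 matching one of the completions above:
     * cid 4, phase 0, sc 0x22, sct 0, m 0, dnr 0. */
    uint32_t dw3 = 4u | (0x22u << 17);
    struct cqe_status s = decode_cqe_dw3(dw3);

    printf("(%02x/%02x) cid:%u p:%u m:%u dnr:%u\n",
           s.sct, s.sc, s.cid, s.p, s.m, s.dnr);
    return 0;
}

With the example dword above, this prints "(00/22) cid:4 p:0 m:0 dnr:0", matching the shape of the completion lines in the log.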
00:22:43.307 [2024-11-18 15:09:06.688011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.688041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.688052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.690822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.690851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.690862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.695103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.695131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.695142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.698396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.698424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.698436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.702212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.702241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.702251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.705761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.705789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.705800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.709145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.709187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.709199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.712777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.712805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.712816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.716141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.716170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.716181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.719535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.719564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.719575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.722752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.722781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.722792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.726422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.307 [2024-11-18 15:09:06.726451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.307 [2024-11-18 15:09:06.726461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.307 [2024-11-18 15:09:06.729756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.308 [2024-11-18 15:09:06.729784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.308 [2024-11-18 15:09:06.729794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.308 [2024-11-18 15:09:06.733049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.308 [2024-11-18 15:09:06.733085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.308 [2024-11-18 15:09:06.733113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.308 [2024-11-18 15:09:06.736587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.308 [2024-11-18 15:09:06.736616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.308 [2024-11-18 15:09:06.736626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.308 [2024-11-18 15:09:06.740289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.308 [2024-11-18 15:09:06.740328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.308 [2024-11-18 15:09:06.740340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.308 [2024-11-18 15:09:06.743877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.308 [2024-11-18 15:09:06.743919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.308 [2024-11-18 15:09:06.743929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.308 [2024-11-18 15:09:06.747191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.308 [2024-11-18 15:09:06.747220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.308 [2024-11-18 15:09:06.747231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.308 [2024-11-18 15:09:06.750054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.308 [2024-11-18 15:09:06.750094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.308 [2024-11-18 15:09:06.750104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.308 [2024-11-18 15:09:06.754021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.308 [2024-11-18 15:09:06.754061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.308 [2024-11-18 15:09:06.754072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.308 [2024-11-18 15:09:06.757887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x68ad10) 00:22:43.308 [2024-11-18 15:09:06.757915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.308 [2024-11-18 15:09:06.757926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:43.308
00:22:43.308 Latency(us)
00:22:43.308 [2024-11-18T15:09:06.898Z] Device Information  : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:22:43.308 [2024-11-18T15:09:06.898Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:22:43.308      nvme0n1        :       2.00    8571.80    1071.47       0.00       0.00    1863.78     543.65    5421.61
00:22:43.308 [2024-11-18T15:09:06.898Z] ===================================================================================================================
00:22:43.308 [2024-11-18T15:09:06.898Z]      Total          :            8571.80    1071.47       0.00       0.00    1863.78     543.65    5421.61
00:22:43.308 0
00:22:43.308 15:09:06 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:22:43.308 15:09:06 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:22:43.308 15:09:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:43.308 15:09:06 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:43.308 | .driver_specific
00:22:43.308 | .nvme_error
00:22:43.308 | .status_code
00:22:43.308 | .command_transient_transport_error'
00:22:43.567 15:09:07 -- host/digest.sh@71 -- # (( 553 > 0 ))
00:22:43.567 15:09:07 -- host/digest.sh@73 -- # killprocess 97976
00:22:43.567 15:09:07 -- common/autotest_common.sh@936 -- # '[' -z 97976 ']'
00:22:43.567 15:09:07 -- common/autotest_common.sh@940 -- # kill -0 97976
00:22:43.567 15:09:07 -- common/autotest_common.sh@941 -- # uname
00:22:43.567 15:09:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:43.567 15:09:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97976
killing process with pid 97976
Received shutdown signal, test time was about 2.000000 seconds
00:22:43.567
00:22:43.567 Latency(us)
[2024-11-18T15:09:07.157Z] Device Information  : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
[2024-11-18T15:09:07.157Z] ===================================================================================================================
[2024-11-18T15:09:07.157Z]      Total          :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
15:09:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:22:43.567 15:09:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:22:43.567 15:09:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97976'
00:22:43.567 15:09:07 -- common/autotest_common.sh@955 -- # kill 97976
00:22:43.567 15:09:07 -- common/autotest_common.sh@960 -- # wait 97976
00:22:43.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
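Before the next run starts, note that the pass/fail decision for the randread case above reduces to a single iostat RPC plus a jq filter over its JSON output. A minimal standalone sketch of that check, assuming a bdevperf instance is already serving RPCs on /var/tmp/bperf.sock, that the attached bdev is named nvme0n1, and that bdev_nvme_set_options was called with --nvme-error-stat (all taken from the trace above):

  #!/usr/bin/env bash
  # Sketch of the transient-error check done by get_transient_errcount above.
  # Assumes bdevperf already listens on /var/tmp/bperf.sock, the bdev is
  # nvme0n1, and --nvme-error-stat was enabled so the counters exist.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock

  errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error')

  # The test only requires a non-zero count: each injected CRC32C corruption is
  # expected to surface as a COMMAND TRANSIENT TRANSPORT ERROR completion.
  (( errcount > 0 )) || echo "no transient transport errors recorded" >&2

In the run above the counter came back as 553, so the (( 553 > 0 )) check passes and the bperf process is torn down.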
00:22:43.827 15:09:07 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:22:43.827 15:09:07 -- host/digest.sh@54 -- # local rw bs qd
00:22:43.827 15:09:07 -- host/digest.sh@56 -- # rw=randwrite
00:22:43.827 15:09:07 -- host/digest.sh@56 -- # bs=4096
00:22:43.827 15:09:07 -- host/digest.sh@56 -- # qd=128
00:22:43.827 15:09:07 -- host/digest.sh@58 -- # bperfpid=98070
00:22:43.827 15:09:07 -- host/digest.sh@60 -- # waitforlisten 98070 /var/tmp/bperf.sock
00:22:43.827 15:09:07 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:22:43.827 15:09:07 -- common/autotest_common.sh@829 -- # '[' -z 98070 ']'
00:22:43.827 15:09:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:43.827 15:09:07 -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:43.827 15:09:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:43.827 15:09:07 -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:43.827 15:09:07 -- common/autotest_common.sh@10 -- # set +x
00:22:43.827 [2024-11-18 15:09:07.397836] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
[2024-11-18 15:09:07.397921] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98070 ]
00:22:44.086 [2024-11-18 15:09:07.530428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:44.086 [2024-11-18 15:09:07.592914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:45.022 15:09:08 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:45.022 15:09:08 -- common/autotest_common.sh@862 -- # return 0
00:22:45.022 15:09:08 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:45.022 15:09:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:45.022 15:09:08 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:45.022 15:09:08 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:45.022 15:09:08 -- common/autotest_common.sh@10 -- # set +x
00:22:45.022 15:09:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:45.022 15:09:08 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:45.022 15:09:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:45.281 nvme0n1
00:22:45.281 15:09:08 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:22:45.281 15:09:08 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:45.281 15:09:08 -- common/autotest_common.sh@10 -- # set +x
00:22:45.540 15:09:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:45.540 15:09:08 -- host/digest.sh@69 -- # bperf_py perform_tests
00:22:45.540 15:09:08 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:45.540 Running I/O for 2 seconds...
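The setup that precedes this 2-second run is spread across the trace above; condensed into one place it looks roughly like the sketch below. Paths, the target address 10.0.0.2:4420 and the NQN are taken verbatim from the log; the backgrounding, the socket wait loop, and the choice of RPC socket for the error injection (rpc_cmd in the trace, which presumably talks to the target application's default socket rather than bperf.sock) are assumptions for illustration. The WRITE-side digest errors that follow are the expected output of this sequence.

  #!/usr/bin/env bash
  # Condensed, illustrative form of the randwrite digest-error setup traced above.
  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bperf.sock

  # Start bdevperf idle (-z): core mask 0x2, 4 KiB random writes, QD 128, 2 s run.
  "$spdk"/build/examples/bdevperf -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &

  # Wait for the RPC socket to appear (the real script polls via waitforlisten).
  while [ ! -S "$sock" ]; do sleep 0.5; done

  # Keep per-command NVMe error statistics and retry failed I/O indefinitely,
  # so digest failures are counted without failing the workload outright.
  "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the NVMe/TCP controller with the data digest (--ddgst) enabled.
  "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Arm the accel error injection: corrupt every 256th crc32c operation so data
  # digests fail. Sent to the default RPC socket here (assumption, see above).
  "$spdk"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

  # Kick off the workload; bdevperf then runs I/O for the 2 seconds logged below.
  "$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests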
00:22:45.540 [2024-11-18 15:09:09.010877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190eea00 00:22:45.540 [2024-11-18 15:09:09.011939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.540 [2024-11-18 15:09:09.011985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:45.540 [2024-11-18 15:09:09.022242] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f6cc8 00:22:45.540 [2024-11-18 15:09:09.023818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.540 [2024-11-18 15:09:09.023859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:45.540 [2024-11-18 15:09:09.031531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e12d8 00:22:45.540 [2024-11-18 15:09:09.032743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.540 [2024-11-18 15:09:09.032773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:45.540 [2024-11-18 15:09:09.041405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f7100 00:22:45.540 [2024-11-18 15:09:09.042340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.540 [2024-11-18 15:09:09.042373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:45.540 [2024-11-18 15:09:09.049586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e7818 00:22:45.540 [2024-11-18 15:09:09.050263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.540 [2024-11-18 15:09:09.050291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:45.540 [2024-11-18 15:09:09.058991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f1868 00:22:45.540 [2024-11-18 15:09:09.059481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.540 [2024-11-18 15:09:09.059518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:45.540 [2024-11-18 15:09:09.066907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f4298 00:22:45.540 [2024-11-18 15:09:09.067108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.540 [2024-11-18 15:09:09.067127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 
sqhd:0015 p:0 m:0 dnr:0 00:22:45.540 [2024-11-18 15:09:09.075980] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190eaab8 00:22:45.540 [2024-11-18 15:09:09.076571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.540 [2024-11-18 15:09:09.076599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:45.540 [2024-11-18 15:09:09.085932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f31b8 00:22:45.540 [2024-11-18 15:09:09.086453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.540 [2024-11-18 15:09:09.086475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:45.540 [2024-11-18 15:09:09.095557] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190eea00 00:22:45.540 [2024-11-18 15:09:09.096734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.540 [2024-11-18 15:09:09.096762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:45.540 [2024-11-18 15:09:09.104548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190ef6a8 00:22:45.540 [2024-11-18 15:09:09.105044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.540 [2024-11-18 15:09:09.105093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:45.540 [2024-11-18 15:09:09.113113] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f4f40 00:22:45.540 [2024-11-18 15:09:09.113933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.540 [2024-11-18 15:09:09.113961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:45.540 [2024-11-18 15:09:09.121537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190ff3c8 00:22:45.540 [2024-11-18 15:09:09.122230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.540 [2024-11-18 15:09:09.122257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.131943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fe2e8 00:22:45.799 [2024-11-18 15:09:09.133094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.133124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:31 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.141522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190ddc00 00:22:45.799 [2024-11-18 15:09:09.142253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.142280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.150732] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fd208 00:22:45.799 [2024-11-18 15:09:09.151849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.151876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.158991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e7c50 00:22:45.799 [2024-11-18 15:09:09.159677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.159704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.168029] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190ddc00 00:22:45.799 [2024-11-18 15:09:09.168330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.168354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.177115] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f81e0 00:22:45.799 [2024-11-18 15:09:09.177810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.177837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.186222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190df550 00:22:45.799 [2024-11-18 15:09:09.187229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.187257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.195514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f96f8 00:22:45.799 [2024-11-18 15:09:09.195943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.195978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.206569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e1b48 00:22:45.799 [2024-11-18 15:09:09.207630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.207657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.213303] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f9b30 00:22:45.799 [2024-11-18 15:09:09.213505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.213524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.222802] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f9b30 00:22:45.799 [2024-11-18 15:09:09.223666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.223693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.231947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f9b30 00:22:45.799 [2024-11-18 15:09:09.232800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.232827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.240855] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190eea00 00:22:45.799 [2024-11-18 15:09:09.241055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.241082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.250734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e49b0 00:22:45.799 [2024-11-18 15:09:09.251717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.251745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.259911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e4140 00:22:45.799 [2024-11-18 15:09:09.260350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.260382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.269234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f6cc8 00:22:45.799 [2024-11-18 15:09:09.269966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.269993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.277269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fe720 00:22:45.799 [2024-11-18 15:09:09.277492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.277511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.286674] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e2c28 00:22:45.799 [2024-11-18 15:09:09.287543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.287569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.296107] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f0788 00:22:45.799 [2024-11-18 15:09:09.296521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.296545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.305378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e6fa8 00:22:45.799 [2024-11-18 15:09:09.305854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.305879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.314455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f2510 00:22:45.799 [2024-11-18 15:09:09.314897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.314924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.323503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e88f8 00:22:45.799 [2024-11-18 15:09:09.323964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.323988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.332492] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190df550 00:22:45.799 [2024-11-18 15:09:09.332884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.332908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.341519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e73e0 00:22:45.799 [2024-11-18 15:09:09.341888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.341912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.350543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e6fa8 00:22:45.799 [2024-11-18 15:09:09.350938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.350963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.359586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e1710 00:22:45.799 [2024-11-18 15:09:09.359942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.799 [2024-11-18 15:09:09.359967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:45.799 [2024-11-18 15:09:09.368843] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fbcf0 00:22:45.799 [2024-11-18 15:09:09.369528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.800 [2024-11-18 15:09:09.369555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:45.800 [2024-11-18 15:09:09.378210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190df988 00:22:45.800 [2024-11-18 15:09:09.378771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.800 [2024-11-18 15:09:09.378798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:46.058 [2024-11-18 15:09:09.387535] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190ee5c8 00:22:46.058 [2024-11-18 15:09:09.388742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.058 [2024-11-18 
15:09:09.388770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:46.058 [2024-11-18 15:09:09.397041] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e95a0 00:22:46.058 [2024-11-18 15:09:09.397803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.058 [2024-11-18 15:09:09.397830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:46.058 [2024-11-18 15:09:09.406422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190ed0b0 00:22:46.058 [2024-11-18 15:09:09.407205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.058 [2024-11-18 15:09:09.407245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:46.058 [2024-11-18 15:09:09.415716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e5ec8 00:22:46.058 [2024-11-18 15:09:09.416928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.058 [2024-11-18 15:09:09.416955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:46.058 [2024-11-18 15:09:09.425244] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e5220 00:22:46.058 [2024-11-18 15:09:09.425914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.058 [2024-11-18 15:09:09.425940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.433181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e4578 00:22:46.059 [2024-11-18 15:09:09.434198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.059 [2024-11-18 15:09:09.434225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.442429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e4578 00:22:46.059 [2024-11-18 15:09:09.443327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.059 [2024-11-18 15:09:09.443351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.451400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e4578 00:22:46.059 [2024-11-18 15:09:09.452449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:46.059 [2024-11-18 15:09:09.452475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.460928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190eee38 00:22:46.059 [2024-11-18 15:09:09.461825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.059 [2024-11-18 15:09:09.461851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.470075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e95a0 00:22:46.059 [2024-11-18 15:09:09.471502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.059 [2024-11-18 15:09:09.471542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.480814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e7818 00:22:46.059 [2024-11-18 15:09:09.481475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.059 [2024-11-18 15:09:09.481523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.491187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f7100 00:22:46.059 [2024-11-18 15:09:09.491949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.059 [2024-11-18 15:09:09.491977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.501006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fcdd0 00:22:46.059 [2024-11-18 15:09:09.501761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.059 [2024-11-18 15:09:09.501790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.510907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e9e10 00:22:46.059 [2024-11-18 15:09:09.511582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.059 [2024-11-18 15:09:09.511608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.520123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f5378 00:22:46.059 [2024-11-18 15:09:09.520836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14873 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:46.059 [2024-11-18 15:09:09.520864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.529650] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fe720 00:22:46.059 [2024-11-18 15:09:09.530400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.059 [2024-11-18 15:09:09.530426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.538990] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f4b08 00:22:46.059 [2024-11-18 15:09:09.539725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.059 [2024-11-18 15:09:09.539753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.548209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e88f8 00:22:46.059 [2024-11-18 15:09:09.548882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.059 [2024-11-18 15:09:09.548910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.558142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e4578 00:22:46.059 [2024-11-18 15:09:09.559184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.059 [2024-11-18 15:09:09.559211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.567979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e4578 00:22:46.059 [2024-11-18 15:09:09.568929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.059 [2024-11-18 15:09:09.568967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.577465] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fc128 00:22:46.059 [2024-11-18 15:09:09.577889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.059 [2024-11-18 15:09:09.577913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.587060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f0788 00:22:46.059 [2024-11-18 15:09:09.587951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15220 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.059 [2024-11-18 15:09:09.587979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.596589] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190dece0 00:22:46.059 [2024-11-18 15:09:09.598056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.059 [2024-11-18 15:09:09.598083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.605770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e23b8 00:22:46.059 [2024-11-18 15:09:09.606717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.059 [2024-11-18 15:09:09.606743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.615206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190ed0b0 00:22:46.059 [2024-11-18 15:09:09.615478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.059 [2024-11-18 15:09:09.615502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.624591] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190ee190 00:22:46.059 [2024-11-18 15:09:09.625232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.059 [2024-11-18 15:09:09.625260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.633728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190ddc00 00:22:46.059 [2024-11-18 15:09:09.634094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.059 [2024-11-18 15:09:09.634119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:46.059 [2024-11-18 15:09:09.643245] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e3d08 00:22:46.059 [2024-11-18 15:09:09.643739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.059 [2024-11-18 15:09:09.643767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:46.319 [2024-11-18 15:09:09.653285] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e95a0 00:22:46.319 [2024-11-18 15:09:09.654103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:62 nsid:1 lba:7922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.319 [2024-11-18 15:09:09.654130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:46.319 [2024-11-18 15:09:09.662592] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f20d8 00:22:46.319 [2024-11-18 15:09:09.663243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.319 [2024-11-18 15:09:09.663270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:46.319 [2024-11-18 15:09:09.670917] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f8e88 00:22:46.319 [2024-11-18 15:09:09.671759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.319 [2024-11-18 15:09:09.671787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:46.319 [2024-11-18 15:09:09.679847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f2510 00:22:46.319 [2024-11-18 15:09:09.680050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.319 [2024-11-18 15:09:09.680069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:46.319 [2024-11-18 15:09:09.688886] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e0a68 00:22:46.319 [2024-11-18 15:09:09.689254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.319 [2024-11-18 15:09:09.689280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:46.319 [2024-11-18 15:09:09.697919] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e3d08 00:22:46.319 [2024-11-18 15:09:09.698226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.319 [2024-11-18 15:09:09.698251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:46.319 [2024-11-18 15:09:09.706905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190ea680 00:22:46.319 [2024-11-18 15:09:09.707189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.319 [2024-11-18 15:09:09.707213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:46.319 [2024-11-18 15:09:09.716008] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fd640 00:22:46.319 [2024-11-18 15:09:09.716265] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.319 [2024-11-18 15:09:09.716289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:46.319 [2024-11-18 15:09:09.724932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fb048 00:22:46.319 [2024-11-18 15:09:09.725193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.319 [2024-11-18 15:09:09.725214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:46.319 [2024-11-18 15:09:09.733819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190ec840 00:22:46.319 [2024-11-18 15:09:09.734046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.319 [2024-11-18 15:09:09.734064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:46.319 [2024-11-18 15:09:09.742791] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190ebb98 00:22:46.319 [2024-11-18 15:09:09.743090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.319 [2024-11-18 15:09:09.743114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:46.319 [2024-11-18 15:09:09.751866] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f9b30 00:22:46.319 [2024-11-18 15:09:09.752824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.319 [2024-11-18 15:09:09.752859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:46.319 [2024-11-18 15:09:09.761102] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f9b30 00:22:46.319 [2024-11-18 15:09:09.762192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.319 [2024-11-18 15:09:09.762221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:46.319 [2024-11-18 15:09:09.770690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f3a28 00:22:46.320 [2024-11-18 15:09:09.771527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.320 [2024-11-18 15:09:09.771562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:46.320 [2024-11-18 15:09:09.780216] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190df118 00:22:46.320 [2024-11-18 15:09:09.780695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.320 [2024-11-18 15:09:09.780727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:46.320 [2024-11-18 15:09:09.789364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fd640 00:22:46.320 [2024-11-18 15:09:09.790018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.320 [2024-11-18 15:09:09.790046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:46.320 [2024-11-18 15:09:09.797414] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190eea00 00:22:46.320 [2024-11-18 15:09:09.797473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.320 [2024-11-18 15:09:09.797492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:46.320 [2024-11-18 15:09:09.806613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190ea680 00:22:46.320 [2024-11-18 15:09:09.807030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.320 [2024-11-18 15:09:09.807054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:46.320 [2024-11-18 15:09:09.817145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f6458 00:22:46.320 [2024-11-18 15:09:09.818468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.320 [2024-11-18 15:09:09.818494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.320 [2024-11-18 15:09:09.825238] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fbcf0 00:22:46.320 [2024-11-18 15:09:09.825951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.320 [2024-11-18 15:09:09.825979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:46.320 [2024-11-18 15:09:09.834671] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190de038 00:22:46.320 [2024-11-18 15:09:09.835703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.320 [2024-11-18 15:09:09.835738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:46.320 [2024-11-18 15:09:09.846225] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e12d8 00:22:46.320 [2024-11-18 
15:09:09.847384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.320 [2024-11-18 15:09:09.847409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:46.320 [2024-11-18 15:09:09.855390] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e4140 00:22:46.320 [2024-11-18 15:09:09.855924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.320 [2024-11-18 15:09:09.855950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:46.320 [2024-11-18 15:09:09.865395] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190ee5c8 00:22:46.320 [2024-11-18 15:09:09.866279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.320 [2024-11-18 15:09:09.866304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:46.320 [2024-11-18 15:09:09.874210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190ddc00 00:22:46.320 [2024-11-18 15:09:09.874547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.320 [2024-11-18 15:09:09.874572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:46.320 [2024-11-18 15:09:09.883884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e1710 00:22:46.320 [2024-11-18 15:09:09.884963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.320 [2024-11-18 15:09:09.884991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:46.320 [2024-11-18 15:09:09.893637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e0a68 00:22:46.320 [2024-11-18 15:09:09.894714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.320 [2024-11-18 15:09:09.894757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:46.320 [2024-11-18 15:09:09.902921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f1ca0 00:22:46.320 [2024-11-18 15:09:09.903462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.320 [2024-11-18 15:09:09.903488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:46.580 [2024-11-18 15:09:09.911218] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e84c0 
00:22:46.580 [2024-11-18 15:09:09.911416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.580 [2024-11-18 15:09:09.911436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:46.580 [2024-11-18 15:09:09.921015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f4b08 00:22:46.580 [2024-11-18 15:09:09.922075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.580 [2024-11-18 15:09:09.922103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:46.580 [2024-11-18 15:09:09.930349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190ff3c8 00:22:46.580 [2024-11-18 15:09:09.930850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.580 [2024-11-18 15:09:09.930877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:46.580 [2024-11-18 15:09:09.939523] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f4b08 00:22:46.580 [2024-11-18 15:09:09.940736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.580 [2024-11-18 15:09:09.940763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:46.580 [2024-11-18 15:09:09.949239] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fc128 00:22:46.580 [2024-11-18 15:09:09.950532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.580 [2024-11-18 15:09:09.950558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:46.580 [2024-11-18 15:09:09.958502] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f57b0 00:22:46.580 [2024-11-18 15:09:09.959467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.580 [2024-11-18 15:09:09.959494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.580 [2024-11-18 15:09:09.967575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e7c50 00:22:46.580 [2024-11-18 15:09:09.968289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.580 [2024-11-18 15:09:09.968325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.580 [2024-11-18 15:09:09.976606] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with 
pdu=0x2000190f2948 00:22:46.580 [2024-11-18 15:09:09.977315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.580 [2024-11-18 15:09:09.977351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:46.580 [2024-11-18 15:09:09.985625] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e8d30 00:22:46.580 [2024-11-18 15:09:09.986294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.580 [2024-11-18 15:09:09.986331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:46.580 [2024-11-18 15:09:09.994639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f2510 00:22:46.580 [2024-11-18 15:09:09.995274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.580 [2024-11-18 15:09:09.995310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:46.580 [2024-11-18 15:09:10.004756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190efae0 00:22:46.580 [2024-11-18 15:09:10.005543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.580 [2024-11-18 15:09:10.005570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:46.580 [2024-11-18 15:09:10.014789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e38d0 00:22:46.580 [2024-11-18 15:09:10.015443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.580 [2024-11-18 15:09:10.015483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:46.580 [2024-11-18 15:09:10.025497] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fef90 00:22:46.580 [2024-11-18 15:09:10.027197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.580 [2024-11-18 15:09:10.027226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:46.580 [2024-11-18 15:09:10.036015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190df988 00:22:46.580 [2024-11-18 15:09:10.037460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.580 [2024-11-18 15:09:10.037487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:46.580 [2024-11-18 15:09:10.045865] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x203b0e0) with pdu=0x2000190e0ea0 00:22:46.580 [2024-11-18 15:09:10.046466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.580 [2024-11-18 15:09:10.046502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:46.580 [2024-11-18 15:09:10.055242] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190ddc00 00:22:46.580 [2024-11-18 15:09:10.055817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.580 [2024-11-18 15:09:10.055844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:46.580 [2024-11-18 15:09:10.064673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f8e88 00:22:46.580 [2024-11-18 15:09:10.065265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.580 [2024-11-18 15:09:10.065294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:46.580 [2024-11-18 15:09:10.074061] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190efae0 00:22:46.580 [2024-11-18 15:09:10.074653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.580 [2024-11-18 15:09:10.074680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:46.580 [2024-11-18 15:09:10.083208] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fdeb0 00:22:46.580 [2024-11-18 15:09:10.084268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.580 [2024-11-18 15:09:10.084295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:46.580 [2024-11-18 15:09:10.092461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190df988 00:22:46.580 [2024-11-18 15:09:10.093788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.580 [2024-11-18 15:09:10.093816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:46.580 [2024-11-18 15:09:10.101701] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e84c0 00:22:46.580 [2024-11-18 15:09:10.102417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.580 [2024-11-18 15:09:10.102444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:46.580 [2024-11-18 15:09:10.110803] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e5a90 00:22:46.581 [2024-11-18 15:09:10.111471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.581 [2024-11-18 15:09:10.111497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:46.581 [2024-11-18 15:09:10.119893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f9f68 00:22:46.581 [2024-11-18 15:09:10.120500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.581 [2024-11-18 15:09:10.120526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:46.581 [2024-11-18 15:09:10.128938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190efae0 00:22:46.581 [2024-11-18 15:09:10.129675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.581 [2024-11-18 15:09:10.129702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:46.581 [2024-11-18 15:09:10.137994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f92c0 00:22:46.581 [2024-11-18 15:09:10.138675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.581 [2024-11-18 15:09:10.138702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:46.581 [2024-11-18 15:09:10.147048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190feb58 00:22:46.581 [2024-11-18 15:09:10.147711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.581 [2024-11-18 15:09:10.147747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:46.581 [2024-11-18 15:09:10.156130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190df550 00:22:46.581 [2024-11-18 15:09:10.156806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.581 [2024-11-18 15:09:10.156847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:46.581 [2024-11-18 15:09:10.165247] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190ea680 00:22:46.581 [2024-11-18 15:09:10.166217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.581 [2024-11-18 15:09:10.166243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:46.840 [2024-11-18 15:09:10.174224] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fb048 00:22:46.841 [2024-11-18 15:09:10.175164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.175190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.183599] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f0ff8 00:22:46.841 [2024-11-18 15:09:10.184205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.184234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.192797] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f6458 00:22:46.841 [2024-11-18 15:09:10.193524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.193551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.201950] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e1710 00:22:46.841 [2024-11-18 15:09:10.202308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:8338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.202342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.211042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f57b0 00:22:46.841 [2024-11-18 15:09:10.211389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.211413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.220152] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f46d0 00:22:46.841 [2024-11-18 15:09:10.220473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.220497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.229196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e8088 00:22:46.841 [2024-11-18 15:09:10.229559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.229583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 
15:09:10.238333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f46d0 00:22:46.841 [2024-11-18 15:09:10.238600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.238624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.247237] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f57b0 00:22:46.841 [2024-11-18 15:09:10.247603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.247626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.256181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f7da8 00:22:46.841 [2024-11-18 15:09:10.257172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.257199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.265412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f7da8 00:22:46.841 [2024-11-18 15:09:10.266509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.266534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.274506] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f7da8 00:22:46.841 [2024-11-18 15:09:10.275402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.275440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.284129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f2948 00:22:46.841 [2024-11-18 15:09:10.285179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.285216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.293551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f35f0 00:22:46.841 [2024-11-18 15:09:10.294244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.294272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:22:46.841 [2024-11-18 15:09:10.302948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e7c50 00:22:46.841 [2024-11-18 15:09:10.303724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.303752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.311948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190de8a8 00:22:46.841 [2024-11-18 15:09:10.312815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.312841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.320262] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e73e0 00:22:46.841 [2024-11-18 15:09:10.320507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.320527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.331303] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e27f0 00:22:46.841 [2024-11-18 15:09:10.332195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.332220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.339482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e84c0 00:22:46.841 [2024-11-18 15:09:10.340437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.340463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.347925] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e5a90 00:22:46.841 [2024-11-18 15:09:10.348072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.348090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.356987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f5378 00:22:46.841 [2024-11-18 15:09:10.357304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.357337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 
sqhd:004c p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.365931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e8d30 00:22:46.841 [2024-11-18 15:09:10.366186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.366211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.374842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e6738 00:22:46.841 [2024-11-18 15:09:10.375072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.375091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.383707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190dfdc0 00:22:46.841 [2024-11-18 15:09:10.383915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.383934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.392707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e7818 00:22:46.841 [2024-11-18 15:09:10.392889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.392909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.403120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e27f0 00:22:46.841 [2024-11-18 15:09:10.404494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.404520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.412327] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e5a90 00:22:46.841 [2024-11-18 15:09:10.413725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.841 [2024-11-18 15:09:10.413750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.841 [2024-11-18 15:09:10.421613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e8088 00:22:46.842 [2024-11-18 15:09:10.423025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.842 [2024-11-18 15:09:10.423052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:47.101 [2024-11-18 15:09:10.429988] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e88f8 00:22:47.101 [2024-11-18 15:09:10.431037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.101 [2024-11-18 15:09:10.431065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:47.101 [2024-11-18 15:09:10.439002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e9168 00:22:47.101 [2024-11-18 15:09:10.440200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.101 [2024-11-18 15:09:10.440228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:47.101 [2024-11-18 15:09:10.449512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190eee38 00:22:47.101 [2024-11-18 15:09:10.450933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.101 [2024-11-18 15:09:10.450960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:47.101 [2024-11-18 15:09:10.457565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e0a68 00:22:47.101 [2024-11-18 15:09:10.458417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.458443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.467819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fa3a0 00:22:47.102 [2024-11-18 15:09:10.468579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.468610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.475300] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f8a50 00:22:47.102 [2024-11-18 15:09:10.476369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.476394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.484664] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e3498 00:22:47.102 [2024-11-18 15:09:10.485819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.485847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.494102] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f9f68 00:22:47.102 [2024-11-18 15:09:10.495142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.495169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.503277] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e1f80 00:22:47.102 [2024-11-18 15:09:10.504253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.504279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.513033] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e49b0 00:22:47.102 [2024-11-18 15:09:10.513858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.513894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.521237] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f31b8 00:22:47.102 [2024-11-18 15:09:10.522070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.522100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.530823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e4de8 00:22:47.102 [2024-11-18 15:09:10.531835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.531862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.541014] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f8618 00:22:47.102 [2024-11-18 15:09:10.541954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.541980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.549811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190eff18 00:22:47.102 [2024-11-18 15:09:10.550343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.550379] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.561589] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fac10 00:22:47.102 [2024-11-18 15:09:10.562744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.562769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.568283] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e4de8 00:22:47.102 [2024-11-18 15:09:10.568631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.568654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.578038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190dfdc0 00:22:47.102 [2024-11-18 15:09:10.579177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.579204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.588105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f0788 00:22:47.102 [2024-11-18 15:09:10.588890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.588917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.596356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f8618 00:22:47.102 [2024-11-18 15:09:10.597204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.597232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.605476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f20d8 00:22:47.102 [2024-11-18 15:09:10.605792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.605815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.614589] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e4de8 00:22:47.102 [2024-11-18 15:09:10.615083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.615110] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.624448] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f46d0 00:22:47.102 [2024-11-18 15:09:10.625698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.625725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.633881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e3060 00:22:47.102 [2024-11-18 15:09:10.634495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.634520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.642076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fc128 00:22:47.102 [2024-11-18 15:09:10.642757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.642783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.652495] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fe720 00:22:47.102 [2024-11-18 15:09:10.653187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.653215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.662662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fb8b8 00:22:47.102 [2024-11-18 15:09:10.663308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.663371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.673777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f6890 00:22:47.102 [2024-11-18 15:09:10.674470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 15:09:10.674496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:47.102 [2024-11-18 15:09:10.682234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f6458 00:22:47.102 [2024-11-18 15:09:10.683215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.102 [2024-11-18 
15:09:10.683242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:47.362 [2024-11-18 15:09:10.692329] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f6890 00:22:47.362 [2024-11-18 15:09:10.693332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.362 [2024-11-18 15:09:10.693372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.362 [2024-11-18 15:09:10.700674] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e23b8 00:22:47.362 [2024-11-18 15:09:10.701512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.362 [2024-11-18 15:09:10.701550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:47.362 [2024-11-18 15:09:10.710189] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190df988 00:22:47.362 [2024-11-18 15:09:10.711101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.362 [2024-11-18 15:09:10.711128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:47.362 [2024-11-18 15:09:10.720374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190eaef0 00:22:47.362 [2024-11-18 15:09:10.721377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.362 [2024-11-18 15:09:10.721439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:47.362 [2024-11-18 15:09:10.729685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e23b8 00:22:47.363 [2024-11-18 15:09:10.729970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.363 [2024-11-18 15:09:10.729993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:47.363 [2024-11-18 15:09:10.738869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f46d0 00:22:47.363 [2024-11-18 15:09:10.739702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.363 [2024-11-18 15:09:10.739750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:47.363 [2024-11-18 15:09:10.748219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fb480 00:22:47.363 [2024-11-18 15:09:10.748904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:47.363 [2024-11-18 15:09:10.748931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:47.363 [2024-11-18 15:09:10.757544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f4f40 00:22:47.363 [2024-11-18 15:09:10.758202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.363 [2024-11-18 15:09:10.758229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:47.363 [2024-11-18 15:09:10.767581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fdeb0 00:22:47.363 [2024-11-18 15:09:10.767838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.363 [2024-11-18 15:09:10.767857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:47.363 [2024-11-18 15:09:10.776832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f46d0 00:22:47.363 [2024-11-18 15:09:10.778057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.363 [2024-11-18 15:09:10.778084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:47.363 [2024-11-18 15:09:10.787825] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190ff3c8 00:22:47.363 [2024-11-18 15:09:10.788586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.363 [2024-11-18 15:09:10.788612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:47.363 [2024-11-18 15:09:10.796465] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f35f0 00:22:47.363 [2024-11-18 15:09:10.797497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.363 [2024-11-18 15:09:10.797523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:47.363 [2024-11-18 15:09:10.805651] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e8088 00:22:47.363 [2024-11-18 15:09:10.805883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.363 [2024-11-18 15:09:10.805913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:47.363 [2024-11-18 15:09:10.814701] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e0630 00:22:47.363 [2024-11-18 15:09:10.815066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23431 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:47.363 [2024-11-18 15:09:10.815090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:47.363 [2024-11-18 15:09:10.823711] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190ed4e8 00:22:47.363 [2024-11-18 15:09:10.824051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.363 [2024-11-18 15:09:10.824075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:47.363 [2024-11-18 15:09:10.832735] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e5ec8 00:22:47.363 [2024-11-18 15:09:10.833052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.363 [2024-11-18 15:09:10.833084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:47.363 [2024-11-18 15:09:10.841880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e6b70 00:22:47.363 [2024-11-18 15:09:10.842177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.363 [2024-11-18 15:09:10.842200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:47.363 [2024-11-18 15:09:10.851038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e84c0 00:22:47.363 [2024-11-18 15:09:10.851340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.363 [2024-11-18 15:09:10.851364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:47.363 [2024-11-18 15:09:10.860908] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f20d8 00:22:47.363 [2024-11-18 15:09:10.861202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.363 [2024-11-18 15:09:10.861226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:47.363 [2024-11-18 15:09:10.871059] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f1ca0 00:22:47.363 [2024-11-18 15:09:10.871298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.363 [2024-11-18 15:09:10.871337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:47.363 [2024-11-18 15:09:10.880759] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f31b8 00:22:47.363 [2024-11-18 15:09:10.880953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 
lba:2225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.363 [2024-11-18 15:09:10.880971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:47.363 [2024-11-18 15:09:10.890186] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e23b8 00:22:47.363 [2024-11-18 15:09:10.890641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.363 [2024-11-18 15:09:10.890666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:47.363 [2024-11-18 15:09:10.899352] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190eaef0 00:22:47.363 [2024-11-18 15:09:10.900491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.363 [2024-11-18 15:09:10.900517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.363 [2024-11-18 15:09:10.908885] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f0ff8 00:22:47.363 [2024-11-18 15:09:10.909920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.363 [2024-11-18 15:09:10.909948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:47.363 [2024-11-18 15:09:10.918293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f8a50 00:22:47.363 [2024-11-18 15:09:10.918986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.363 [2024-11-18 15:09:10.919013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:47.363 [2024-11-18 15:09:10.926911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fb048 00:22:47.363 [2024-11-18 15:09:10.928006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.363 [2024-11-18 15:09:10.928033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:47.363 [2024-11-18 15:09:10.936149] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190f0ff8 00:22:47.364 [2024-11-18 15:09:10.936393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.364 [2024-11-18 15:09:10.936412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:47.364 [2024-11-18 15:09:10.945228] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190ebfd0 00:22:47.364 [2024-11-18 15:09:10.945927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:111 nsid:1 lba:8784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.364 [2024-11-18 15:09:10.945955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:47.623 [2024-11-18 15:09:10.954691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190eee38 00:22:47.623 [2024-11-18 15:09:10.955677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.623 [2024-11-18 15:09:10.955703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:47.623 [2024-11-18 15:09:10.963711] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190de470 00:22:47.623 [2024-11-18 15:09:10.964824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.623 [2024-11-18 15:09:10.964851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:47.623 [2024-11-18 15:09:10.972758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fef90 00:22:47.623 [2024-11-18 15:09:10.974005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.623 [2024-11-18 15:09:10.974032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:47.623 [2024-11-18 15:09:10.982505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190fb8b8 00:22:47.623 [2024-11-18 15:09:10.983709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.623 [2024-11-18 15:09:10.983735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:47.623 [2024-11-18 15:09:10.991694] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b0e0) with pdu=0x2000190e95a0 00:22:47.623 [2024-11-18 15:09:10.992160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.623 [2024-11-18 15:09:10.992187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:47.623 00:22:47.623 Latency(us) 00:22:47.623 [2024-11-18T15:09:11.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.623 [2024-11-18T15:09:11.213Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:47.623 nvme0n1 : 2.00 27322.05 106.73 0.00 0.00 4679.20 1869.27 13702.98 00:22:47.623 [2024-11-18T15:09:11.213Z] =================================================================================================================== 00:22:47.623 [2024-11-18T15:09:11.213Z] Total : 27322.05 106.73 0.00 0.00 4679.20 1869.27 13702.98 00:22:47.623 0 00:22:47.623 15:09:11 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:47.623 15:09:11 -- host/digest.sh@27 -- # 
bperf_rpc bdev_get_iostat -b nvme0n1 00:22:47.623 15:09:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:47.623 15:09:11 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:47.623 | .driver_specific 00:22:47.623 | .nvme_error 00:22:47.623 | .status_code 00:22:47.623 | .command_transient_transport_error' 00:22:47.882 15:09:11 -- host/digest.sh@71 -- # (( 214 > 0 )) 00:22:47.882 15:09:11 -- host/digest.sh@73 -- # killprocess 98070 00:22:47.882 15:09:11 -- common/autotest_common.sh@936 -- # '[' -z 98070 ']' 00:22:47.882 15:09:11 -- common/autotest_common.sh@940 -- # kill -0 98070 00:22:47.882 15:09:11 -- common/autotest_common.sh@941 -- # uname 00:22:47.882 15:09:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:47.882 15:09:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98070 00:22:47.882 killing process with pid 98070 00:22:47.882 Received shutdown signal, test time was about 2.000000 seconds 00:22:47.882 00:22:47.882 Latency(us) 00:22:47.882 [2024-11-18T15:09:11.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.882 [2024-11-18T15:09:11.472Z] =================================================================================================================== 00:22:47.882 [2024-11-18T15:09:11.472Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:47.882 15:09:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:47.882 15:09:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:47.882 15:09:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98070' 00:22:47.882 15:09:11 -- common/autotest_common.sh@955 -- # kill 98070 00:22:47.882 15:09:11 -- common/autotest_common.sh@960 -- # wait 98070 00:22:48.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:48.141 15:09:11 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:22:48.141 15:09:11 -- host/digest.sh@54 -- # local rw bs qd 00:22:48.141 15:09:11 -- host/digest.sh@56 -- # rw=randwrite 00:22:48.141 15:09:11 -- host/digest.sh@56 -- # bs=131072 00:22:48.141 15:09:11 -- host/digest.sh@56 -- # qd=16 00:22:48.141 15:09:11 -- host/digest.sh@58 -- # bperfpid=98155 00:22:48.141 15:09:11 -- host/digest.sh@60 -- # waitforlisten 98155 /var/tmp/bperf.sock 00:22:48.141 15:09:11 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:22:48.141 15:09:11 -- common/autotest_common.sh@829 -- # '[' -z 98155 ']' 00:22:48.141 15:09:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:48.141 15:09:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:48.141 15:09:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:48.141 15:09:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:48.141 15:09:11 -- common/autotest_common.sh@10 -- # set +x 00:22:48.141 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:48.141 Zero copy mechanism will not be used. 00:22:48.141 [2024-11-18 15:09:11.627928] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
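[editor's note, not part of the log] The xtrace above shows how host/digest.sh's get_transient_errcount helper verifies the run: it asks the bdevperf process, over its RPC socket, for per-bdev I/O statistics and pulls the accumulated transient-transport-error counter out of the JSON with jq, then requires it to be non-zero (here it saw 214). A minimal sketch of that same query pattern follows, using only the commands, socket path and bdev name visible in the trace; the variable names are illustrative, not part of the test.

  # Sketch of the error-count check seen above (paths/socket as used by this run).
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  errcount=$("$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The digest test passes only if at least one transient transport error was recorded,
  # i.e. the injected data-digest corruption was detected and retried.
  (( errcount > 0 )) && echo "transient transport errors observed: $errcount"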
00:22:48.141 [2024-11-18 15:09:11.628031] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98155 ] 00:22:48.400 [2024-11-18 15:09:11.766526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.400 [2024-11-18 15:09:11.828209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.336 15:09:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:49.336 15:09:12 -- common/autotest_common.sh@862 -- # return 0 00:22:49.336 15:09:12 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:49.336 15:09:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:49.337 15:09:12 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:49.337 15:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.337 15:09:12 -- common/autotest_common.sh@10 -- # set +x 00:22:49.337 15:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.337 15:09:12 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:49.337 15:09:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:49.595 nvme0n1 00:22:49.595 15:09:13 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:49.595 15:09:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.595 15:09:13 -- common/autotest_common.sh@10 -- # set +x 00:22:49.855 15:09:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.855 15:09:13 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:49.855 15:09:13 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:49.855 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:49.855 Zero copy mechanism will not be used. 00:22:49.855 Running I/O for 2 seconds... 
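[editor's note, not part of the log] The trace above is the setup for the next digest-error run: enable per-bdev NVMe error counters in the new bdevperf instance, attach the TCP target with data digest enabled (--ddgst), arm the target-side accel crc32c error injector to corrupt every 32nd operation, and then drive I/O through bdevperf's perform_tests hook. The sketch below restates that sequence from the xtrace; the addresses, NQN and socket are the ones this run used, and the socket that rpc_cmd (the accel_error_inject_error calls) targets is not shown in this excerpt, so the default rpc.py socket is assumed here.

  # Sketch assembled from the xtrace above; not an authoritative script.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  "$rpc_py" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # accel_error_inject_error is issued via rpc_cmd against the target application,
  # not the bperf socket; the default rpc.py socket is assumed for illustration.
  "$rpc_py" accel_error_inject_error -o crc32c -t disable
  "$rpc_py" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$rpc_py" accel_error_inject_error -o crc32c -t corrupt -i 32   # corrupt every 32nd crc32c op
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests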
00:22:49.855 [2024-11-18 15:09:13.279482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.855 [2024-11-18 15:09:13.279828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-11-18 15:09:13.279861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.855 [2024-11-18 15:09:13.283716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.855 [2024-11-18 15:09:13.283979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-11-18 15:09:13.284024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.855 [2024-11-18 15:09:13.287754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.855 [2024-11-18 15:09:13.287842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-11-18 15:09:13.287864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.855 [2024-11-18 15:09:13.291772] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.855 [2024-11-18 15:09:13.291849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-11-18 15:09:13.291871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.855 [2024-11-18 15:09:13.295801] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.855 [2024-11-18 15:09:13.295928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-11-18 15:09:13.295950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.855 [2024-11-18 15:09:13.299842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.855 [2024-11-18 15:09:13.299918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-11-18 15:09:13.299938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.855 [2024-11-18 15:09:13.304054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.855 [2024-11-18 15:09:13.304205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-11-18 15:09:13.304226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.855 [2024-11-18 15:09:13.308176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.855 [2024-11-18 15:09:13.308364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-11-18 15:09:13.308386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.855 [2024-11-18 15:09:13.312257] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.855 [2024-11-18 15:09:13.312436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-11-18 15:09:13.312457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.855 [2024-11-18 15:09:13.316345] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.855 [2024-11-18 15:09:13.316502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-11-18 15:09:13.316523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.855 [2024-11-18 15:09:13.320380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.855 [2024-11-18 15:09:13.320462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-11-18 15:09:13.320482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.855 [2024-11-18 15:09:13.324348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.855 [2024-11-18 15:09:13.324448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-11-18 15:09:13.324468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.855 [2024-11-18 15:09:13.328385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.855 [2024-11-18 15:09:13.328507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-11-18 15:09:13.328527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.855 [2024-11-18 15:09:13.332540] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.855 [2024-11-18 15:09:13.332681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-11-18 15:09:13.332702] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.855 [2024-11-18 15:09:13.336555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.855 [2024-11-18 15:09:13.336752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.856 [2024-11-18 15:09:13.336773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.856 [2024-11-18 15:09:13.340592] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.856 [2024-11-18 15:09:13.340766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.856 [2024-11-18 15:09:13.340786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.856 [2024-11-18 15:09:13.344563] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.856 [2024-11-18 15:09:13.344769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.856 [2024-11-18 15:09:13.344789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.856 [2024-11-18 15:09:13.348433] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.856 [2024-11-18 15:09:13.348598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.856 [2024-11-18 15:09:13.348618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.856 [2024-11-18 15:09:13.352508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.856 [2024-11-18 15:09:13.352643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.856 [2024-11-18 15:09:13.352663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.856 [2024-11-18 15:09:13.356570] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.856 [2024-11-18 15:09:13.356679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.856 [2024-11-18 15:09:13.356700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.856 [2024-11-18 15:09:13.360508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.856 [2024-11-18 15:09:13.360597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.856 [2024-11-18 15:09:13.360618] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.856 [2024-11-18 15:09:13.364480] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.856 [2024-11-18 15:09:13.364610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.856 [2024-11-18 15:09:13.364631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.856 [2024-11-18 15:09:13.368527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.856 [2024-11-18 15:09:13.368681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.856 [2024-11-18 15:09:13.368702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.856 [2024-11-18 15:09:13.372544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.856 [2024-11-18 15:09:13.372719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.856 [2024-11-18 15:09:13.372739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.856 [2024-11-18 15:09:13.376692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.856 [2024-11-18 15:09:13.376882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.856 [2024-11-18 15:09:13.376902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.856 [2024-11-18 15:09:13.380658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.856 [2024-11-18 15:09:13.380837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.856 [2024-11-18 15:09:13.380857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.856 [2024-11-18 15:09:13.384772] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.856 [2024-11-18 15:09:13.384898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.856 [2024-11-18 15:09:13.384918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.856 [2024-11-18 15:09:13.388785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.856 [2024-11-18 15:09:13.388870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:49.856 [2024-11-18 15:09:13.388891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.856 [2024-11-18 15:09:13.392703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.856 [2024-11-18 15:09:13.392812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.856 [2024-11-18 15:09:13.392833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.856 [2024-11-18 15:09:13.396742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.856 [2024-11-18 15:09:13.396867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.856 [2024-11-18 15:09:13.396888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.856 [2024-11-18 15:09:13.400825] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.856 [2024-11-18 15:09:13.401017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.856 [2024-11-18 15:09:13.401037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.856 [2024-11-18 15:09:13.405030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.856 [2024-11-18 15:09:13.405249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.856 [2024-11-18 15:09:13.405270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.856 [2024-11-18 15:09:13.409099] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.856 [2024-11-18 15:09:13.409302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.856 [2024-11-18 15:09:13.409334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.856 [2024-11-18 15:09:13.413156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.856 [2024-11-18 15:09:13.413302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.856 [2024-11-18 15:09:13.413333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.856 [2024-11-18 15:09:13.417236] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.856 [2024-11-18 15:09:13.417374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.856 [2024-11-18 15:09:13.417406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.856 [2024-11-18 15:09:13.421272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.856 [2024-11-18 15:09:13.421383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.856 [2024-11-18 15:09:13.421414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.856 [2024-11-18 15:09:13.425415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.856 [2024-11-18 15:09:13.425486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.856 [2024-11-18 15:09:13.425506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.857 [2024-11-18 15:09:13.429439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.857 [2024-11-18 15:09:13.429563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.857 [2024-11-18 15:09:13.429583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.857 [2024-11-18 15:09:13.433508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.857 [2024-11-18 15:09:13.433741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.857 [2024-11-18 15:09:13.433761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.857 [2024-11-18 15:09:13.437653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:49.857 [2024-11-18 15:09:13.437840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.857 [2024-11-18 15:09:13.437861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.117 [2024-11-18 15:09:13.441900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.117 [2024-11-18 15:09:13.442162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.117 [2024-11-18 15:09:13.442206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.117 [2024-11-18 15:09:13.446123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.117 [2024-11-18 15:09:13.446335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.117 [2024-11-18 15:09:13.446355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.117 [2024-11-18 15:09:13.450260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.117 [2024-11-18 15:09:13.450394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.117 [2024-11-18 15:09:13.450415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.117 [2024-11-18 15:09:13.454375] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.117 [2024-11-18 15:09:13.454486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.117 [2024-11-18 15:09:13.454506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.117 [2024-11-18 15:09:13.458392] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.117 [2024-11-18 15:09:13.458500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.117 [2024-11-18 15:09:13.458522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.117 [2024-11-18 15:09:13.462450] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.117 [2024-11-18 15:09:13.462611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.117 [2024-11-18 15:09:13.462633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.117 [2024-11-18 15:09:13.466599] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.117 [2024-11-18 15:09:13.466856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.117 [2024-11-18 15:09:13.466899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.117 [2024-11-18 15:09:13.470667] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.117 [2024-11-18 15:09:13.470804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.117 [2024-11-18 15:09:13.470825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.117 [2024-11-18 15:09:13.474729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.117 [2024-11-18 15:09:13.474858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.117 [2024-11-18 15:09:13.474878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.117 [2024-11-18 15:09:13.478783] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.117 [2024-11-18 15:09:13.478897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.117 [2024-11-18 15:09:13.478917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.117 [2024-11-18 15:09:13.482820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.117 [2024-11-18 15:09:13.482947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.117 [2024-11-18 15:09:13.482967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.117 [2024-11-18 15:09:13.486744] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.117 [2024-11-18 15:09:13.486838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.117 [2024-11-18 15:09:13.486859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.117 [2024-11-18 15:09:13.490664] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.117 [2024-11-18 15:09:13.490802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.117 [2024-11-18 15:09:13.490822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.117 [2024-11-18 15:09:13.494773] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.117 [2024-11-18 15:09:13.494925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.117 [2024-11-18 15:09:13.494945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.117 [2024-11-18 15:09:13.498715] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.117 [2024-11-18 15:09:13.498979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.499020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.502732] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 
15:09:13.502905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.502927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.506784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.506950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.506971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.510782] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.510861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.510882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.514800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.514932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.514953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.518828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.518951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.518971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.522833] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.522943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.522964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.526951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.527100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.527122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.530976] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 
00:22:50.118 [2024-11-18 15:09:13.531186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.531205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.535015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.535195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.535216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.539089] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.539216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.539236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.543154] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.543230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.543250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.547353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.547481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.547502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.551373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.551515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.551536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.555462] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.555539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.555560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.559532] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.559679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.559701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.563532] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.563772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.563792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.567684] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.567884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.567905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.571708] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.571862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.571883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.575710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.575857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.575877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.579730] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.579882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.579902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.583705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.583848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.583869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.587694] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.587771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.587792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.591777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.591928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.591949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.595734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.595953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.595974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.599777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.599982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.600003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.603781] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.603906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.118 [2024-11-18 15:09:13.603926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.118 [2024-11-18 15:09:13.607856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.118 [2024-11-18 15:09:13.607946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.607966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.119 [2024-11-18 15:09:13.611965] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.612090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.612111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.119 [2024-11-18 15:09:13.616020] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.616115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.616136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.119 [2024-11-18 15:09:13.619960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.620040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.620060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.119 [2024-11-18 15:09:13.624020] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.624167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.624203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.119 [2024-11-18 15:09:13.628049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.628307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.628346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.119 [2024-11-18 15:09:13.632177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.632355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.632377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.119 [2024-11-18 15:09:13.636241] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.636381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.636402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.119 [2024-11-18 15:09:13.640331] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.640440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.640461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:22:50.119 [2024-11-18 15:09:13.644343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.644489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.644510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.119 [2024-11-18 15:09:13.648359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.648486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.648506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.119 [2024-11-18 15:09:13.652413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.652513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.652533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.119 [2024-11-18 15:09:13.656584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.656746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.656767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.119 [2024-11-18 15:09:13.660624] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.660884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.660912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.119 [2024-11-18 15:09:13.664729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.664924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.664945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.119 [2024-11-18 15:09:13.668738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.668886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.668906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.119 [2024-11-18 15:09:13.672700] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.672828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.672848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.119 [2024-11-18 15:09:13.676846] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.676968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.676989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.119 [2024-11-18 15:09:13.680867] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.680995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.681015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.119 [2024-11-18 15:09:13.684899] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.685012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.685032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.119 [2024-11-18 15:09:13.689053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.689226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.689247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.119 [2024-11-18 15:09:13.693046] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.693281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.693301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.119 [2024-11-18 15:09:13.697355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.697589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.697610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.119 [2024-11-18 15:09:13.701373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.119 [2024-11-18 15:09:13.701594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.119 [2024-11-18 15:09:13.701645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.380 [2024-11-18 15:09:13.705639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.380 [2024-11-18 15:09:13.705715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.380 [2024-11-18 15:09:13.705735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.380 [2024-11-18 15:09:13.709701] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.380 [2024-11-18 15:09:13.709848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.380 [2024-11-18 15:09:13.709869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.380 [2024-11-18 15:09:13.713947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.380 [2024-11-18 15:09:13.714076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.380 [2024-11-18 15:09:13.714097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.380 [2024-11-18 15:09:13.717885] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.380 [2024-11-18 15:09:13.718012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.380 [2024-11-18 15:09:13.718033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.380 [2024-11-18 15:09:13.721948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.380 [2024-11-18 15:09:13.722091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.380 [2024-11-18 15:09:13.722111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.380 [2024-11-18 15:09:13.725935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.380 [2024-11-18 15:09:13.726173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.380 [2024-11-18 15:09:13.726206] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.380 [2024-11-18 15:09:13.730023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.380 [2024-11-18 15:09:13.730195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.380 [2024-11-18 15:09:13.730215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.380 [2024-11-18 15:09:13.733948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.380 [2024-11-18 15:09:13.734048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.380 [2024-11-18 15:09:13.734068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.380 [2024-11-18 15:09:13.737921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.380 [2024-11-18 15:09:13.738000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.380 [2024-11-18 15:09:13.738021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.380 [2024-11-18 15:09:13.741961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.380 [2024-11-18 15:09:13.742091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.380 [2024-11-18 15:09:13.742112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.380 [2024-11-18 15:09:13.745984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.380 [2024-11-18 15:09:13.746068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.380 [2024-11-18 15:09:13.746088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.380 [2024-11-18 15:09:13.749998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.380 [2024-11-18 15:09:13.750085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.380 [2024-11-18 15:09:13.750105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.380 [2024-11-18 15:09:13.754062] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.380 [2024-11-18 15:09:13.754215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.380 [2024-11-18 
15:09:13.754236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.380 [2024-11-18 15:09:13.758084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.380 [2024-11-18 15:09:13.758254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.380 [2024-11-18 15:09:13.758275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.380 [2024-11-18 15:09:13.762049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.762139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.381 [2024-11-18 15:09:13.762160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.381 [2024-11-18 15:09:13.766155] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.766350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.381 [2024-11-18 15:09:13.766371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.381 [2024-11-18 15:09:13.770178] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.770343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.381 [2024-11-18 15:09:13.770363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.381 [2024-11-18 15:09:13.774296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.774481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.381 [2024-11-18 15:09:13.774501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.381 [2024-11-18 15:09:13.778280] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.778385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.381 [2024-11-18 15:09:13.778406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.381 [2024-11-18 15:09:13.782326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.782446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:50.381 [2024-11-18 15:09:13.782467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.381 [2024-11-18 15:09:13.786486] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.786650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.381 [2024-11-18 15:09:13.786671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.381 [2024-11-18 15:09:13.790518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.790707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.381 [2024-11-18 15:09:13.790738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.381 [2024-11-18 15:09:13.794462] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.794589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.381 [2024-11-18 15:09:13.794609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.381 [2024-11-18 15:09:13.798590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.798729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.381 [2024-11-18 15:09:13.798749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.381 [2024-11-18 15:09:13.802596] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.802687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.381 [2024-11-18 15:09:13.802707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.381 [2024-11-18 15:09:13.806619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.806759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.381 [2024-11-18 15:09:13.806780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.381 [2024-11-18 15:09:13.810719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.810854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.381 [2024-11-18 15:09:13.810875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.381 [2024-11-18 15:09:13.814742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.814854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.381 [2024-11-18 15:09:13.814874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.381 [2024-11-18 15:09:13.818894] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.819042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.381 [2024-11-18 15:09:13.819064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.381 [2024-11-18 15:09:13.822813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.823035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.381 [2024-11-18 15:09:13.823055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.381 [2024-11-18 15:09:13.826874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.827049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.381 [2024-11-18 15:09:13.827069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.381 [2024-11-18 15:09:13.830942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.831065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.381 [2024-11-18 15:09:13.831085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.381 [2024-11-18 15:09:13.834960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.835046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.381 [2024-11-18 15:09:13.835066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.381 [2024-11-18 15:09:13.838969] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.839123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.381 [2024-11-18 15:09:13.839143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.381 [2024-11-18 15:09:13.842909] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.843032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.381 [2024-11-18 15:09:13.843052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.381 [2024-11-18 15:09:13.846869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.846968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.381 [2024-11-18 15:09:13.846988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.381 [2024-11-18 15:09:13.851004] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.381 [2024-11-18 15:09:13.851148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.381 [2024-11-18 15:09:13.851169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.382 [2024-11-18 15:09:13.854971] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.382 [2024-11-18 15:09:13.855180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.382 [2024-11-18 15:09:13.855201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.382 [2024-11-18 15:09:13.858970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.382 [2024-11-18 15:09:13.859169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.382 [2024-11-18 15:09:13.859190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.382 [2024-11-18 15:09:13.862868] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.382 [2024-11-18 15:09:13.863026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.382 [2024-11-18 15:09:13.863047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.382 [2024-11-18 15:09:13.866837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.382 [2024-11-18 15:09:13.866930] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.382 [2024-11-18 15:09:13.866950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.382 [2024-11-18 15:09:13.870875] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.382 [2024-11-18 15:09:13.871034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.382 [2024-11-18 15:09:13.871055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.382 [2024-11-18 15:09:13.874932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.382 [2024-11-18 15:09:13.875091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.382 [2024-11-18 15:09:13.875111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.382 [2024-11-18 15:09:13.878931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.382 [2024-11-18 15:09:13.879035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.382 [2024-11-18 15:09:13.879057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.382 [2024-11-18 15:09:13.882963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.382 [2024-11-18 15:09:13.883120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.382 [2024-11-18 15:09:13.883141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.382 [2024-11-18 15:09:13.887029] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.382 [2024-11-18 15:09:13.887257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.382 [2024-11-18 15:09:13.887290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.382 [2024-11-18 15:09:13.891041] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.382 [2024-11-18 15:09:13.891229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.382 [2024-11-18 15:09:13.891249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.382 [2024-11-18 15:09:13.895033] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.382 [2024-11-18 15:09:13.895206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.382 [2024-11-18 15:09:13.895227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.382 [2024-11-18 15:09:13.899020] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.382 [2024-11-18 15:09:13.899112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.382 [2024-11-18 15:09:13.899133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.382 [2024-11-18 15:09:13.903114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.382 [2024-11-18 15:09:13.903255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.382 [2024-11-18 15:09:13.903275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.382 [2024-11-18 15:09:13.907180] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.382 [2024-11-18 15:09:13.907302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.382 [2024-11-18 15:09:13.907347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.382 [2024-11-18 15:09:13.911410] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.382 [2024-11-18 15:09:13.911523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.382 [2024-11-18 15:09:13.911543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.382 [2024-11-18 15:09:13.915885] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.382 [2024-11-18 15:09:13.916050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.382 [2024-11-18 15:09:13.916071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.382 [2024-11-18 15:09:13.920207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.382 [2024-11-18 15:09:13.920477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.382 [2024-11-18 15:09:13.920499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.382 [2024-11-18 15:09:13.925172] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.382 [2024-11-18 
15:09:13.925380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.382 [2024-11-18 15:09:13.925407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.382 [2024-11-18 15:09:13.929710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.382 [2024-11-18 15:09:13.929818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.382 [2024-11-18 15:09:13.929838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.382 [2024-11-18 15:09:13.933992] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.382 [2024-11-18 15:09:13.934072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.382 [2024-11-18 15:09:13.934093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.382 [2024-11-18 15:09:13.938398] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.382 [2024-11-18 15:09:13.938578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.382 [2024-11-18 15:09:13.938599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.382 [2024-11-18 15:09:13.942785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.383 [2024-11-18 15:09:13.942889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.383 [2024-11-18 15:09:13.942910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.383 [2024-11-18 15:09:13.947051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.383 [2024-11-18 15:09:13.947153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.383 [2024-11-18 15:09:13.947173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.383 [2024-11-18 15:09:13.951082] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.383 [2024-11-18 15:09:13.951228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.383 [2024-11-18 15:09:13.951249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.383 [2024-11-18 15:09:13.955082] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with 
pdu=0x2000190fef90 00:22:50.383 [2024-11-18 15:09:13.955366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.383 [2024-11-18 15:09:13.955399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.383 [2024-11-18 15:09:13.959070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.383 [2024-11-18 15:09:13.959251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.383 [2024-11-18 15:09:13.959271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.383 [2024-11-18 15:09:13.963211] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.383 [2024-11-18 15:09:13.963331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.383 [2024-11-18 15:09:13.963364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.644 [2024-11-18 15:09:13.967427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.644 [2024-11-18 15:09:13.967522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.644 [2024-11-18 15:09:13.967542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.644 [2024-11-18 15:09:13.971503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.644 [2024-11-18 15:09:13.971635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.644 [2024-11-18 15:09:13.971655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.644 [2024-11-18 15:09:13.975603] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.644 [2024-11-18 15:09:13.975712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.644 [2024-11-18 15:09:13.975733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.644 [2024-11-18 15:09:13.979633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.644 [2024-11-18 15:09:13.979743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.644 [2024-11-18 15:09:13.979764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.644 [2024-11-18 15:09:13.983732] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.644 [2024-11-18 15:09:13.983876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.644 [2024-11-18 15:09:13.983897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.644 [2024-11-18 15:09:13.987707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.644 [2024-11-18 15:09:13.987915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.644 [2024-11-18 15:09:13.987936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.644 [2024-11-18 15:09:13.991746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.644 [2024-11-18 15:09:13.991955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.644 [2024-11-18 15:09:13.991975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.644 [2024-11-18 15:09:13.995849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.644 [2024-11-18 15:09:13.996012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.644 [2024-11-18 15:09:13.996032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.644 [2024-11-18 15:09:13.999780] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.644 [2024-11-18 15:09:13.999861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.644 [2024-11-18 15:09:13.999882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.644 [2024-11-18 15:09:14.003835] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.644 [2024-11-18 15:09:14.004015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.644 [2024-11-18 15:09:14.004035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.644 [2024-11-18 15:09:14.007850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.644 [2024-11-18 15:09:14.007969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.644 [2024-11-18 15:09:14.007989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.644 [2024-11-18 15:09:14.011828] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.644 [2024-11-18 15:09:14.011926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.644 [2024-11-18 15:09:14.011946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.644 [2024-11-18 15:09:14.015875] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.644 [2024-11-18 15:09:14.016025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.644 [2024-11-18 15:09:14.016046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.644 [2024-11-18 15:09:14.019872] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.644 [2024-11-18 15:09:14.020086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.644 [2024-11-18 15:09:14.020106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.644 [2024-11-18 15:09:14.023863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.644 [2024-11-18 15:09:14.024039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.644 [2024-11-18 15:09:14.024059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.644 [2024-11-18 15:09:14.027920] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.644 [2024-11-18 15:09:14.028054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.644 [2024-11-18 15:09:14.028075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.644 [2024-11-18 15:09:14.031852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.644 [2024-11-18 15:09:14.031936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.644 [2024-11-18 15:09:14.031956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.644 [2024-11-18 15:09:14.035887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.644 [2024-11-18 15:09:14.036010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.644 [2024-11-18 15:09:14.036031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:22:50.644 [2024-11-18 15:09:14.039956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.644 [2024-11-18 15:09:14.040066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.644 [2024-11-18 15:09:14.040086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.644 [2024-11-18 15:09:14.043940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.644 [2024-11-18 15:09:14.044056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.645 [2024-11-18 15:09:14.044076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.645 [2024-11-18 15:09:14.048104] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.645 [2024-11-18 15:09:14.048250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.645 [2024-11-18 15:09:14.048271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.645 [2024-11-18 15:09:14.052104] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.645 [2024-11-18 15:09:14.052390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.645 [2024-11-18 15:09:14.052423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.645 [2024-11-18 15:09:14.056058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.645 [2024-11-18 15:09:14.056265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.645 [2024-11-18 15:09:14.056285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.645 [2024-11-18 15:09:14.060080] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.645 [2024-11-18 15:09:14.060258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.645 [2024-11-18 15:09:14.060278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.645 [2024-11-18 15:09:14.064051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.645 [2024-11-18 15:09:14.064127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.645 [2024-11-18 15:09:14.064147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.645 [2024-11-18 15:09:14.068146] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.645 [2024-11-18 15:09:14.068271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.645 [2024-11-18 15:09:14.068291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.645 [2024-11-18 15:09:14.072147] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.645 [2024-11-18 15:09:14.072263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.645 [2024-11-18 15:09:14.072282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.645 [2024-11-18 15:09:14.076127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.645 [2024-11-18 15:09:14.076208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.645 [2024-11-18 15:09:14.076228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:50.645 [2024-11-18 15:09:14.080195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.645 [2024-11-18 15:09:14.080357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.645 [2024-11-18 15:09:14.080378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:50.645 [2024-11-18 15:09:14.084117] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.645 [2024-11-18 15:09:14.084286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.645 [2024-11-18 15:09:14.084306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:50.645 [2024-11-18 15:09:14.088101] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.645 [2024-11-18 15:09:14.088209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.645 [2024-11-18 15:09:14.088230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.645 [2024-11-18 15:09:14.092292] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:50.645 [2024-11-18 15:09:14.092427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.645 [2024-11-18 15:09:14.092447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:50.645 [2024-11-18 15:09:14.096310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90
00:22:50.645 [2024-11-18 15:09:14.096419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:50.645 [2024-11-18 15:09:14.096440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:22:50.645 [2024-11-18 15:09:14.100429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90
00:22:50.645 [2024-11-18 15:09:14.100553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:50.645 [2024-11-18 15:09:14.100573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line pattern repeats for a long run of further WRITE commands (timestamps 15:09:14.104 through 15:09:14.678): a data_crc32_calc_done data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90, the affected WRITE (sqid:1, cid:15 then cid:0, nsid:1, len:32, varying lba), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, with sqhd cycling 0001/0021/0041/0061 ...]
00:22:51.173 [2024-11-18 15:09:14.682038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90
00:22:51.173 [2024-11-18 15:09:14.682211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:51.173 [2024-11-18 15:09:14.682231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:51.173 [2024-11-18 15:09:14.686138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90
00:22:51.173 [2024-11-18 15:09:14.686248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:51.173 [2024-11-18 15:09:14.686267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:22:51.173 [2024-11-18 15:09:14.690133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90
00:22:51.173 [2024-11-18 15:09:14.690292] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.173 [2024-11-18 15:09:14.690313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.173 [2024-11-18 15:09:14.694311] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.173 [2024-11-18 15:09:14.694427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.173 [2024-11-18 15:09:14.694448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.173 [2024-11-18 15:09:14.698270] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.173 [2024-11-18 15:09:14.698390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.173 [2024-11-18 15:09:14.698411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.173 [2024-11-18 15:09:14.702371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.173 [2024-11-18 15:09:14.702521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.173 [2024-11-18 15:09:14.702542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.173 [2024-11-18 15:09:14.706411] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.173 [2024-11-18 15:09:14.706582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.173 [2024-11-18 15:09:14.706602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.173 [2024-11-18 15:09:14.710345] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.173 [2024-11-18 15:09:14.710462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.173 [2024-11-18 15:09:14.710482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.173 [2024-11-18 15:09:14.714413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.173 [2024-11-18 15:09:14.714531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.173 [2024-11-18 15:09:14.714552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.173 [2024-11-18 15:09:14.718373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.173 [2024-11-18 15:09:14.718460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.173 [2024-11-18 15:09:14.718480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.173 [2024-11-18 15:09:14.722438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.173 [2024-11-18 15:09:14.722563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.173 [2024-11-18 15:09:14.722584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.173 [2024-11-18 15:09:14.726463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.173 [2024-11-18 15:09:14.726576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.173 [2024-11-18 15:09:14.726597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.173 [2024-11-18 15:09:14.730467] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.173 [2024-11-18 15:09:14.730561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.173 [2024-11-18 15:09:14.730581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.173 [2024-11-18 15:09:14.734483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.173 [2024-11-18 15:09:14.734632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.173 [2024-11-18 15:09:14.734653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.173 [2024-11-18 15:09:14.738401] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.173 [2024-11-18 15:09:14.738599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.173 [2024-11-18 15:09:14.738619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.173 [2024-11-18 15:09:14.742464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.173 [2024-11-18 15:09:14.742677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.173 [2024-11-18 15:09:14.742704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.173 [2024-11-18 15:09:14.746426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.173 [2024-11-18 
15:09:14.746594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.173 [2024-11-18 15:09:14.746614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.173 [2024-11-18 15:09:14.750393] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.173 [2024-11-18 15:09:14.750481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.173 [2024-11-18 15:09:14.750501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.174 [2024-11-18 15:09:14.754453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.174 [2024-11-18 15:09:14.754593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.174 [2024-11-18 15:09:14.754615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.434 [2024-11-18 15:09:14.758653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.434 [2024-11-18 15:09:14.758756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.434 [2024-11-18 15:09:14.758776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.434 [2024-11-18 15:09:14.762742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.434 [2024-11-18 15:09:14.762865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.434 [2024-11-18 15:09:14.762887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.434 [2024-11-18 15:09:14.766945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.434 [2024-11-18 15:09:14.767092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.434 [2024-11-18 15:09:14.767112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.434 [2024-11-18 15:09:14.770946] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.434 [2024-11-18 15:09:14.771050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.434 [2024-11-18 15:09:14.771070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.434 [2024-11-18 15:09:14.774912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with 
pdu=0x2000190fef90 00:22:51.434 [2024-11-18 15:09:14.774989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.434 [2024-11-18 15:09:14.775009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.434 [2024-11-18 15:09:14.778912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.434 [2024-11-18 15:09:14.779071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.434 [2024-11-18 15:09:14.779092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.434 [2024-11-18 15:09:14.782934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.434 [2024-11-18 15:09:14.783106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.434 [2024-11-18 15:09:14.783127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.434 [2024-11-18 15:09:14.787120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.434 [2024-11-18 15:09:14.787309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.434 [2024-11-18 15:09:14.787351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.434 [2024-11-18 15:09:14.791218] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.434 [2024-11-18 15:09:14.791341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.434 [2024-11-18 15:09:14.791361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.434 [2024-11-18 15:09:14.795263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.434 [2024-11-18 15:09:14.795387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.434 [2024-11-18 15:09:14.795408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.434 [2024-11-18 15:09:14.799399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.434 [2024-11-18 15:09:14.799556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.434 [2024-11-18 15:09:14.799576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.434 [2024-11-18 15:09:14.803409] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.434 [2024-11-18 15:09:14.803509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.434 [2024-11-18 15:09:14.803529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.434 [2024-11-18 15:09:14.807277] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.434 [2024-11-18 15:09:14.807394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.434 [2024-11-18 15:09:14.807414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.434 [2024-11-18 15:09:14.811482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.434 [2024-11-18 15:09:14.811629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.434 [2024-11-18 15:09:14.811649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.434 [2024-11-18 15:09:14.815424] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.434 [2024-11-18 15:09:14.815641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.434 [2024-11-18 15:09:14.815661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.434 [2024-11-18 15:09:14.819614] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.434 [2024-11-18 15:09:14.819746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.434 [2024-11-18 15:09:14.819766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.434 [2024-11-18 15:09:14.823652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.434 [2024-11-18 15:09:14.823797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.434 [2024-11-18 15:09:14.823817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.434 [2024-11-18 15:09:14.827703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.434 [2024-11-18 15:09:14.827814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.434 [2024-11-18 15:09:14.827834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.434 [2024-11-18 15:09:14.831817] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.434 [2024-11-18 15:09:14.831956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.434 [2024-11-18 15:09:14.831976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.434 [2024-11-18 15:09:14.835905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.836036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.836056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.839974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.840064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.840084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.844066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.844211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.844231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.848035] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.848273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.848302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.852042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.852150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.852170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.856133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.856310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.856352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.860174] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.860260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.860281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.864293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.864482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.864503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.868304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.868443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.868463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.872355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.872464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.872483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.876432] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.876600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.876621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.880528] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.880808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.880849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.884687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.884886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.884906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:22:51.435 [2024-11-18 15:09:14.888857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.888984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.889004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.892982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.893128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.893148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.897151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.897299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.897329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.901129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.901224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.901244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.905246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.905353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.905374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.909309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.909477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.909498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.913268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.913498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.913519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.917254] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.917441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.917462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.921256] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.921390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.921410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.925273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.925401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.925430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.929373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.929570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.929591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.933419] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.933567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.933587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.937733] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.937862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.937882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.435 [2024-11-18 15:09:14.942239] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.435 [2024-11-18 15:09:14.942437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.435 [2024-11-18 15:09:14.942460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.436 [2024-11-18 15:09:14.947065] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.436 [2024-11-18 15:09:14.947303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.436 [2024-11-18 15:09:14.947365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.436 [2024-11-18 15:09:14.951565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.436 [2024-11-18 15:09:14.951795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.436 [2024-11-18 15:09:14.951815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.436 [2024-11-18 15:09:14.955993] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.436 [2024-11-18 15:09:14.956125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.436 [2024-11-18 15:09:14.956145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.436 [2024-11-18 15:09:14.960314] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.436 [2024-11-18 15:09:14.960457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.436 [2024-11-18 15:09:14.960477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.436 [2024-11-18 15:09:14.964547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.436 [2024-11-18 15:09:14.964709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.436 [2024-11-18 15:09:14.964729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.436 [2024-11-18 15:09:14.968645] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.436 [2024-11-18 15:09:14.968769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.436 [2024-11-18 15:09:14.968789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.436 [2024-11-18 15:09:14.972656] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.436 [2024-11-18 15:09:14.972767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.436 [2024-11-18 15:09:14.972787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.436 [2024-11-18 15:09:14.976844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.436 [2024-11-18 15:09:14.977004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.436 [2024-11-18 15:09:14.977024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.436 [2024-11-18 15:09:14.980880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.436 [2024-11-18 15:09:14.981106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.436 [2024-11-18 15:09:14.981127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.436 [2024-11-18 15:09:14.985034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.436 [2024-11-18 15:09:14.985261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.436 [2024-11-18 15:09:14.985288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.436 [2024-11-18 15:09:14.989019] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.436 [2024-11-18 15:09:14.989183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.436 [2024-11-18 15:09:14.989205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.436 [2024-11-18 15:09:14.993000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.436 [2024-11-18 15:09:14.993116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.436 [2024-11-18 15:09:14.993137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.436 [2024-11-18 15:09:14.996980] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.436 [2024-11-18 15:09:14.997130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.436 [2024-11-18 15:09:14.997151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.436 [2024-11-18 15:09:15.000987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.436 [2024-11-18 15:09:15.001160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.436 [2024-11-18 
15:09:15.001181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.436 [2024-11-18 15:09:15.005042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.436 [2024-11-18 15:09:15.005177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.436 [2024-11-18 15:09:15.005197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.436 [2024-11-18 15:09:15.009105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.436 [2024-11-18 15:09:15.009255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.436 [2024-11-18 15:09:15.009275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.436 [2024-11-18 15:09:15.013145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.436 [2024-11-18 15:09:15.013331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.436 [2024-11-18 15:09:15.013352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.436 [2024-11-18 15:09:15.017313] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.436 [2024-11-18 15:09:15.017595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.436 [2024-11-18 15:09:15.017625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.698 [2024-11-18 15:09:15.021679] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.698 [2024-11-18 15:09:15.021834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.698 [2024-11-18 15:09:15.021854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.698 [2024-11-18 15:09:15.025695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.698 [2024-11-18 15:09:15.025808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.698 [2024-11-18 15:09:15.025844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.698 [2024-11-18 15:09:15.029903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.698 [2024-11-18 15:09:15.030030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:51.698 [2024-11-18 15:09:15.030050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.698 [2024-11-18 15:09:15.033875] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.698 [2024-11-18 15:09:15.033987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.698 [2024-11-18 15:09:15.034007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.698 [2024-11-18 15:09:15.037919] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.698 [2024-11-18 15:09:15.038028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.698 [2024-11-18 15:09:15.038048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.698 [2024-11-18 15:09:15.042029] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.698 [2024-11-18 15:09:15.042177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.698 [2024-11-18 15:09:15.042197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.698 [2024-11-18 15:09:15.046062] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.698 [2024-11-18 15:09:15.046251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.698 [2024-11-18 15:09:15.046271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.698 [2024-11-18 15:09:15.050141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.698 [2024-11-18 15:09:15.050310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.698 [2024-11-18 15:09:15.050342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.698 [2024-11-18 15:09:15.054185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.698 [2024-11-18 15:09:15.054320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.698 [2024-11-18 15:09:15.054353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.698 [2024-11-18 15:09:15.058083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.698 [2024-11-18 15:09:15.058162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.698 [2024-11-18 15:09:15.058182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[2024-11-18 15:09:15.062175 through 15:09:15.254902] The same three-message pattern repeats for several dozen further 32-block WRITEs on qid:1 (first cid:0, then cid:15) at varying LBAs: tcp.c:2036:data_crc32_calc_done reports "Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90", nvme_qpair.c: 243:nvme_io_qpair_print_command prints the affected WRITE, and nvme_qpair.c: 474:spdk_nvme_print_completion prints its completion as COMMAND TRANSIENT TRANSPORT ERROR (00/22).
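The digest failures logged above (and the few remaining entries below) are the intended outcome of this half of the digest test: bperf writes over an NVMe/TCP connection that carries a CRC32C data digest (DDGST) on each data PDU, and a failed digest check surfaces as COMMAND TRANSIENT TRANSPORT ERROR (00/22). A minimal sketch of attaching a controller with digests enabled over the bperf RPC socket is shown here; the --hdgst/--ddgst flag names and the subsystem NQN are assumptions based on upstream SPDK's rpc.py, not values taken from this log.
# Sketch only: enable header and data digest on an NVMe/TCP attach.
# Flag names (--hdgst/--ddgst) and the NQN below are assumed, not from this log.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 --hdgst --ddgst
# With data digest enabled, every data PDU carries a CRC32C over its payload;
# a mismatch is reported as a data digest error and the command completes with
# COMMAND TRANSIENT TRANSPORT ERROR (00/22), exactly as logged above.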
00:22:51.700 [2024-11-18 15:09:15.258852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.700 [2024-11-18 15:09:15.258978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.700 [2024-11-18 15:09:15.258998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.700 [2024-11-18 15:09:15.262839] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.700 [2024-11-18 15:09:15.262949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.700 [2024-11-18 15:09:15.262969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.700 [2024-11-18 15:09:15.266885] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.700 [2024-11-18 15:09:15.266987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.700 [2024-11-18 15:09:15.267008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.700 [2024-11-18 15:09:15.270974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203b280) with pdu=0x2000190fef90 00:22:51.700 [2024-11-18 15:09:15.271120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.700 [2024-11-18 15:09:15.271140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.700 00:22:51.700 Latency(us) 00:22:51.700 [2024-11-18T15:09:15.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.700 [2024-11-18T15:09:15.290Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:51.700 nvme0n1 : 2.00 7587.44 948.43 0.00 0.00 2104.57 1504.35 4974.78 00:22:51.700 [2024-11-18T15:09:15.290Z] =================================================================================================================== 00:22:51.700 [2024-11-18T15:09:15.290Z] Total : 7587.44 948.43 0.00 0.00 2104.57 1504.35 4974.78 00:22:51.700 0 00:22:51.959 15:09:15 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:51.959 15:09:15 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:51.959 15:09:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:51.959 15:09:15 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:51.959 | .driver_specific 00:22:51.959 | .nvme_error 00:22:51.959 | .status_code 00:22:51.959 | .command_transient_transport_error' 00:22:52.218 15:09:15 -- host/digest.sh@71 -- # (( 489 > 0 )) 00:22:52.218 15:09:15 -- host/digest.sh@73 -- # killprocess 98155 00:22:52.218 15:09:15 -- common/autotest_common.sh@936 -- # '[' -z 98155 ']' 00:22:52.218 15:09:15 -- common/autotest_common.sh@940 -- # kill -0 98155 00:22:52.218 15:09:15 -- common/autotest_common.sh@941 -- # uname 00:22:52.218 15:09:15 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:52.218 15:09:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98155 00:22:52.218 killing process with pid 98155 00:22:52.218 Received shutdown signal, test time was about 2.000000 seconds 00:22:52.218 00:22:52.218 Latency(us) 00:22:52.218 [2024-11-18T15:09:15.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.218 [2024-11-18T15:09:15.808Z] =================================================================================================================== 00:22:52.218 [2024-11-18T15:09:15.808Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:52.218 15:09:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:52.218 15:09:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:52.218 15:09:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98155' 00:22:52.218 15:09:15 -- common/autotest_common.sh@955 -- # kill 98155 00:22:52.218 15:09:15 -- common/autotest_common.sh@960 -- # wait 98155 00:22:52.477 15:09:15 -- host/digest.sh@115 -- # killprocess 97844 00:22:52.477 15:09:15 -- common/autotest_common.sh@936 -- # '[' -z 97844 ']' 00:22:52.477 15:09:15 -- common/autotest_common.sh@940 -- # kill -0 97844 00:22:52.477 15:09:15 -- common/autotest_common.sh@941 -- # uname 00:22:52.477 15:09:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:52.477 15:09:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97844 00:22:52.477 killing process with pid 97844 00:22:52.477 15:09:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:52.477 15:09:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:52.477 15:09:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97844' 00:22:52.477 15:09:15 -- common/autotest_common.sh@955 -- # kill 97844 00:22:52.477 15:09:15 -- common/autotest_common.sh@960 -- # wait 97844 00:22:52.477 ************************************ 00:22:52.477 END TEST nvmf_digest_error 00:22:52.477 ************************************ 00:22:52.477 00:22:52.477 real 0m18.113s 00:22:52.477 user 0m33.080s 00:22:52.477 sys 0m5.500s 00:22:52.477 15:09:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:52.477 15:09:16 -- common/autotest_common.sh@10 -- # set +x 00:22:52.736 15:09:16 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:22:52.736 15:09:16 -- host/digest.sh@139 -- # nvmftestfini 00:22:52.736 15:09:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:52.736 15:09:16 -- nvmf/common.sh@116 -- # sync 00:22:52.736 15:09:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:52.736 15:09:16 -- nvmf/common.sh@119 -- # set +e 00:22:52.736 15:09:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:52.736 15:09:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:52.736 rmmod nvme_tcp 00:22:52.736 rmmod nvme_fabrics 00:22:52.736 rmmod nvme_keyring 00:22:52.736 15:09:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:52.736 15:09:16 -- nvmf/common.sh@123 -- # set -e 00:22:52.736 15:09:16 -- nvmf/common.sh@124 -- # return 0 00:22:52.736 15:09:16 -- nvmf/common.sh@477 -- # '[' -n 97844 ']' 00:22:52.736 15:09:16 -- nvmf/common.sh@478 -- # killprocess 97844 00:22:52.736 15:09:16 -- common/autotest_common.sh@936 -- # '[' -z 97844 ']' 00:22:52.736 15:09:16 -- common/autotest_common.sh@940 -- # kill -0 97844 00:22:52.736 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (97844) - No such process 
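Stepping back to the check traced a few lines above: the get_transient_errcount helper is just bdev_get_iostat filtered through jq. Expanded into a standalone form (same RPC socket, bdev name, and jq path as in the trace) it reads:
# Standalone form of the error-count check above: read the transient transport
# error counter for nvme0n1 from the bperf app and fail if none were recorded.
errs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
           bdev_get_iostat -b nvme0n1 |
       jq -r '.bdevs[0]
              | .driver_specific
              | .nvme_error
              | .status_code
              | .command_transient_transport_error')
(( errs > 0 )) || exit 1   # the run above counted 489 such errors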
00:22:52.736 Process with pid 97844 is not found 00:22:52.736 15:09:16 -- common/autotest_common.sh@963 -- # echo 'Process with pid 97844 is not found' 00:22:52.736 15:09:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:52.736 15:09:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:52.736 15:09:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:52.736 15:09:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:52.736 15:09:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:52.736 15:09:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.736 15:09:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.736 15:09:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.736 15:09:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:52.736 00:22:52.736 real 0m36.526s 00:22:52.737 user 1m5.269s 00:22:52.737 sys 0m11.069s 00:22:52.737 15:09:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:52.737 15:09:16 -- common/autotest_common.sh@10 -- # set +x 00:22:52.737 ************************************ 00:22:52.737 END TEST nvmf_digest 00:22:52.737 ************************************ 00:22:52.737 15:09:16 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:22:52.737 15:09:16 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:22:52.737 15:09:16 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:52.737 15:09:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:52.737 15:09:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:52.737 15:09:16 -- common/autotest_common.sh@10 -- # set +x 00:22:52.997 ************************************ 00:22:52.997 START TEST nvmf_mdns_discovery 00:22:52.998 ************************************ 00:22:52.998 15:09:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:52.998 * Looking for test storage... 00:22:52.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:52.998 15:09:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:52.998 15:09:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:52.998 15:09:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:52.998 15:09:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:52.998 15:09:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:52.998 15:09:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:52.998 15:09:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:52.998 15:09:16 -- scripts/common.sh@335 -- # IFS=.-: 00:22:52.998 15:09:16 -- scripts/common.sh@335 -- # read -ra ver1 00:22:52.998 15:09:16 -- scripts/common.sh@336 -- # IFS=.-: 00:22:52.998 15:09:16 -- scripts/common.sh@336 -- # read -ra ver2 00:22:52.998 15:09:16 -- scripts/common.sh@337 -- # local 'op=<' 00:22:52.998 15:09:16 -- scripts/common.sh@339 -- # ver1_l=2 00:22:52.998 15:09:16 -- scripts/common.sh@340 -- # ver2_l=1 00:22:52.998 15:09:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:52.998 15:09:16 -- scripts/common.sh@343 -- # case "$op" in 00:22:52.998 15:09:16 -- scripts/common.sh@344 -- # : 1 00:22:52.998 15:09:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:52.998 15:09:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:52.998 15:09:16 -- scripts/common.sh@364 -- # decimal 1 00:22:52.998 15:09:16 -- scripts/common.sh@352 -- # local d=1 00:22:52.998 15:09:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:52.998 15:09:16 -- scripts/common.sh@354 -- # echo 1 00:22:52.998 15:09:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:52.998 15:09:16 -- scripts/common.sh@365 -- # decimal 2 00:22:52.998 15:09:16 -- scripts/common.sh@352 -- # local d=2 00:22:52.998 15:09:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:52.998 15:09:16 -- scripts/common.sh@354 -- # echo 2 00:22:52.998 15:09:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:52.998 15:09:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:52.998 15:09:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:52.998 15:09:16 -- scripts/common.sh@367 -- # return 0 00:22:52.998 15:09:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:52.998 15:09:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:52.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.998 --rc genhtml_branch_coverage=1 00:22:52.998 --rc genhtml_function_coverage=1 00:22:52.998 --rc genhtml_legend=1 00:22:52.998 --rc geninfo_all_blocks=1 00:22:52.998 --rc geninfo_unexecuted_blocks=1 00:22:52.998 00:22:52.998 ' 00:22:52.998 15:09:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:52.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.998 --rc genhtml_branch_coverage=1 00:22:52.998 --rc genhtml_function_coverage=1 00:22:52.998 --rc genhtml_legend=1 00:22:52.998 --rc geninfo_all_blocks=1 00:22:52.998 --rc geninfo_unexecuted_blocks=1 00:22:52.998 00:22:52.998 ' 00:22:52.998 15:09:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:52.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.998 --rc genhtml_branch_coverage=1 00:22:52.998 --rc genhtml_function_coverage=1 00:22:52.998 --rc genhtml_legend=1 00:22:52.998 --rc geninfo_all_blocks=1 00:22:52.998 --rc geninfo_unexecuted_blocks=1 00:22:52.998 00:22:52.998 ' 00:22:52.998 15:09:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:52.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.998 --rc genhtml_branch_coverage=1 00:22:52.998 --rc genhtml_function_coverage=1 00:22:52.998 --rc genhtml_legend=1 00:22:52.998 --rc geninfo_all_blocks=1 00:22:52.998 --rc geninfo_unexecuted_blocks=1 00:22:52.998 00:22:52.998 ' 00:22:52.998 15:09:16 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:52.998 15:09:16 -- nvmf/common.sh@7 -- # uname -s 00:22:52.998 15:09:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:52.998 15:09:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:52.998 15:09:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:52.998 15:09:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:52.998 15:09:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:52.998 15:09:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:52.998 15:09:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:52.998 15:09:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:52.998 15:09:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:52.998 15:09:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:52.998 15:09:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 
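The cmp_versions walk traced above ("lt 1.15 2" for the installed lcov) boils down to a field-by-field numeric comparison of the two version strings. A condensed sketch of the same idea, assuming purely numeric fields (the real scripts/common.sh helper also normalizes non-numeric fields):
# Condensed sketch of the version test traced above (assumes numeric fields).
lt() {
    local -a a b
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field already decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "installed lcov is older than 2.x"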
00:22:52.998 15:09:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:22:52.998 15:09:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:52.998 15:09:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:52.998 15:09:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:52.998 15:09:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:52.998 15:09:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:52.998 15:09:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:52.998 15:09:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:52.998 15:09:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.998 15:09:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.998 15:09:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.998 15:09:16 -- paths/export.sh@5 -- # export PATH 00:22:52.998 15:09:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.998 15:09:16 -- nvmf/common.sh@46 -- # : 0 00:22:52.998 15:09:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:52.998 15:09:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:52.998 15:09:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:52.998 15:09:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:52.998 15:09:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:52.998 15:09:16 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:22:52.998 15:09:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:52.998 15:09:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:52.998 15:09:16 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:22:52.998 15:09:16 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:22:52.998 15:09:16 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:52.998 15:09:16 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:52.998 15:09:16 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:22:52.998 15:09:16 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:52.999 15:09:16 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:22:52.999 15:09:16 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:22:52.999 15:09:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:52.999 15:09:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.999 15:09:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:52.999 15:09:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:52.999 15:09:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:52.999 15:09:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.999 15:09:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.999 15:09:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.999 15:09:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:52.999 15:09:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:52.999 15:09:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:52.999 15:09:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:52.999 15:09:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:52.999 15:09:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:52.999 15:09:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.999 15:09:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.999 15:09:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:52.999 15:09:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:52.999 15:09:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:52.999 15:09:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:52.999 15:09:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:52.999 15:09:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.999 15:09:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:52.999 15:09:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:52.999 15:09:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:52.999 15:09:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:52.999 15:09:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:52.999 15:09:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:52.999 Cannot find device "nvmf_tgt_br" 00:22:52.999 15:09:16 -- nvmf/common.sh@154 -- # true 00:22:52.999 15:09:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:52.999 Cannot find device "nvmf_tgt_br2" 00:22:52.999 15:09:16 -- nvmf/common.sh@155 -- # true 00:22:52.999 15:09:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:52.999 15:09:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:53.260 Cannot find device "nvmf_tgt_br" 00:22:53.260 15:09:16 -- nvmf/common.sh@157 -- # true 00:22:53.260 
15:09:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:53.260 Cannot find device "nvmf_tgt_br2" 00:22:53.260 15:09:16 -- nvmf/common.sh@158 -- # true 00:22:53.260 15:09:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:53.260 15:09:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:53.260 15:09:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:53.260 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:53.260 15:09:16 -- nvmf/common.sh@161 -- # true 00:22:53.260 15:09:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:53.260 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:53.260 15:09:16 -- nvmf/common.sh@162 -- # true 00:22:53.260 15:09:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:53.260 15:09:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:53.260 15:09:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:53.260 15:09:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:53.260 15:09:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:53.260 15:09:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:53.260 15:09:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:53.260 15:09:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:53.260 15:09:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:53.260 15:09:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:53.260 15:09:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:53.260 15:09:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:53.260 15:09:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:53.260 15:09:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:53.260 15:09:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:53.260 15:09:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:53.260 15:09:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:53.260 15:09:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:53.260 15:09:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:53.260 15:09:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:53.260 15:09:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:53.260 15:09:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:53.260 15:09:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:53.260 15:09:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:53.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:53.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:22:53.260 00:22:53.260 --- 10.0.0.2 ping statistics --- 00:22:53.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.260 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:22:53.260 15:09:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:53.260 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:53.260 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:22:53.260 00:22:53.260 --- 10.0.0.3 ping statistics --- 00:22:53.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.260 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:22:53.260 15:09:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:53.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:53.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:22:53.260 00:22:53.260 --- 10.0.0.1 ping statistics --- 00:22:53.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.260 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:22:53.260 15:09:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:53.260 15:09:16 -- nvmf/common.sh@421 -- # return 0 00:22:53.260 15:09:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:53.260 15:09:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:53.260 15:09:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:53.260 15:09:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:53.260 15:09:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:53.260 15:09:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:53.260 15:09:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:53.520 15:09:16 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:53.520 15:09:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:53.520 15:09:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:53.520 15:09:16 -- common/autotest_common.sh@10 -- # set +x 00:22:53.520 15:09:16 -- nvmf/common.sh@469 -- # nvmfpid=98459 00:22:53.520 15:09:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:53.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.520 15:09:16 -- nvmf/common.sh@470 -- # waitforlisten 98459 00:22:53.520 15:09:16 -- common/autotest_common.sh@829 -- # '[' -z 98459 ']' 00:22:53.520 15:09:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.520 15:09:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:53.520 15:09:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.520 15:09:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:53.520 15:09:16 -- common/autotest_common.sh@10 -- # set +x 00:22:53.520 [2024-11-18 15:09:16.919547] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:53.520 [2024-11-18 15:09:16.919827] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.520 [2024-11-18 15:09:17.059143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.779 [2024-11-18 15:09:17.133474] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:53.779 [2024-11-18 15:09:17.133643] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.779 [2024-11-18 15:09:17.133656] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
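The nvmf_veth_init trace above builds the two-address target namespace that the rest of the suite relies on: the target listens on 10.0.0.2 and 10.0.0.3 inside nvmf_tgt_ns_spdk, the initiator stays in the root namespace on 10.0.0.1, and a bridge joins the veth ends. Condensed into one ordered sequence (device names and addresses exactly as in the log):
# Build the nvmf test topology: one namespace, three veth pairs, one bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3            # root ns -> target ns
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns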
00:22:53.779 [2024-11-18 15:09:17.133664] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:53.779 [2024-11-18 15:09:17.133698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.346 15:09:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:54.346 15:09:17 -- common/autotest_common.sh@862 -- # return 0 00:22:54.346 15:09:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:54.346 15:09:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:54.346 15:09:17 -- common/autotest_common.sh@10 -- # set +x 00:22:54.346 15:09:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.346 15:09:17 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:22:54.346 15:09:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.346 15:09:17 -- common/autotest_common.sh@10 -- # set +x 00:22:54.346 15:09:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.346 15:09:17 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:22:54.346 15:09:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.346 15:09:17 -- common/autotest_common.sh@10 -- # set +x 00:22:54.605 15:09:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.605 15:09:17 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:54.605 15:09:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.605 15:09:17 -- common/autotest_common.sh@10 -- # set +x 00:22:54.606 [2024-11-18 15:09:17.946363] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.606 15:09:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.606 15:09:17 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:54.606 15:09:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.606 15:09:17 -- common/autotest_common.sh@10 -- # set +x 00:22:54.606 [2024-11-18 15:09:17.954523] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:54.606 15:09:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.606 15:09:17 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:54.606 15:09:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.606 15:09:17 -- common/autotest_common.sh@10 -- # set +x 00:22:54.606 null0 00:22:54.606 15:09:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.606 15:09:17 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:54.606 15:09:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.606 15:09:17 -- common/autotest_common.sh@10 -- # set +x 00:22:54.606 null1 00:22:54.606 15:09:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.606 15:09:17 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:22:54.606 15:09:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.606 15:09:17 -- common/autotest_common.sh@10 -- # set +x 00:22:54.606 null2 00:22:54.606 15:09:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.606 15:09:17 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:22:54.606 15:09:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.606 15:09:17 -- common/autotest_common.sh@10 -- # set +x 00:22:54.606 null3 00:22:54.606 15:09:17 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.606 15:09:17 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:22:54.606 15:09:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.606 15:09:17 -- common/autotest_common.sh@10 -- # set +x 00:22:54.606 15:09:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.606 15:09:17 -- host/mdns_discovery.sh@47 -- # hostpid=98513 00:22:54.606 15:09:17 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:54.606 15:09:17 -- host/mdns_discovery.sh@48 -- # waitforlisten 98513 /tmp/host.sock 00:22:54.606 15:09:17 -- common/autotest_common.sh@829 -- # '[' -z 98513 ']' 00:22:54.606 15:09:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:54.606 15:09:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:54.606 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:54.606 15:09:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:54.606 15:09:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:54.606 15:09:18 -- common/autotest_common.sh@10 -- # set +x 00:22:54.606 [2024-11-18 15:09:18.055223] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:54.606 [2024-11-18 15:09:18.055302] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98513 ] 00:22:54.865 [2024-11-18 15:09:18.197005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.865 [2024-11-18 15:09:18.262617] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:54.865 [2024-11-18 15:09:18.262826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.813 15:09:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:55.813 15:09:19 -- common/autotest_common.sh@862 -- # return 0 00:22:55.813 15:09:19 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:22:55.813 15:09:19 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:22:55.813 15:09:19 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:22:55.813 15:09:19 -- host/mdns_discovery.sh@57 -- # avahipid=98544 00:22:55.813 15:09:19 -- host/mdns_discovery.sh@58 -- # sleep 1 00:22:55.813 15:09:19 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:22:55.813 15:09:19 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:22:55.813 Process 1058 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:22:55.813 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:22:55.813 Successfully dropped root privileges. 00:22:55.813 avahi-daemon 0.8 starting up. 00:22:55.813 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:22:55.813 Successfully called chroot(). 00:22:55.813 Successfully dropped remaining capabilities. 00:22:55.813 No service file found in /etc/avahi/services. 00:22:56.750 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 
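For reference, the rpc_cmd sequence traced above (discovery filter, transport, discovery listener on port 8009, four null bdevs) amounts to the following direct rpc.py calls; that rpc.py defaults to the target's /var/tmp/spdk.sock control socket is an assumption of this sketch, supplied implicitly by the test's rpc_cmd wrapper.
# Target-side bring-up for the mDNS discovery test, as traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_set_config --discovery-filter=address
"$rpc" framework_start_init
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
       -t tcp -a 10.0.0.2 -s 8009
for b in null0 null1 null2 null3; do
    "$rpc" bdev_null_create "$b" 1000 512   # name, size, block size, as traced above
done
"$rpc" bdev_wait_for_examine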
00:22:56.750 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:22:56.750 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:22:56.750 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:22:56.750 Network interface enumeration completed. 00:22:56.750 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 00:22:56.750 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:22:56.750 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:22:56.750 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:22:56.750 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 345424780. 00:22:56.750 15:09:20 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:56.750 15:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.750 15:09:20 -- common/autotest_common.sh@10 -- # set +x 00:22:56.750 15:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.750 15:09:20 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:56.750 15:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.750 15:09:20 -- common/autotest_common.sh@10 -- # set +x 00:22:56.750 15:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.750 15:09:20 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:22:56.750 15:09:20 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:22:56.750 15:09:20 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:56.750 15:09:20 -- host/mdns_discovery.sh@68 -- # xargs 00:22:56.750 15:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.750 15:09:20 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:56.750 15:09:20 -- common/autotest_common.sh@10 -- # set +x 00:22:56.750 15:09:20 -- host/mdns_discovery.sh@68 -- # sort 00:22:56.750 15:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.750 15:09:20 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:22:56.750 15:09:20 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:22:56.750 15:09:20 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:56.750 15:09:20 -- host/mdns_discovery.sh@64 -- # sort 00:22:56.750 15:09:20 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.750 15:09:20 -- host/mdns_discovery.sh@64 -- # xargs 00:22:56.750 15:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.750 15:09:20 -- common/autotest_common.sh@10 -- # set +x 00:22:56.750 15:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.750 15:09:20 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:22:56.750 15:09:20 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:56.750 15:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.750 15:09:20 -- common/autotest_common.sh@10 -- # set +x 00:22:56.750 15:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.750 15:09:20 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:22:56.750 15:09:20 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:56.750 15:09:20 -- host/mdns_discovery.sh@68 -- # sort 00:22:56.750 15:09:20 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:56.750 15:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 
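The host-side flow traced above runs against the second nvmf_tgt instance (started with -r /tmp/host.sock): it is told to browse mDNS for _nvme-disc._tcp advertisements and to log in with the test host NQN, and the helpers that follow simply poll the resulting discovery services, controllers, and bdevs. The same steps as direct rpc.py calls:
# Start mDNS-based discovery on the host app and inspect what it finds.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
       -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
"$rpc" -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info | jq -r '.[].name'
"$rpc" -s /tmp/host.sock bdev_nvme_get_controllers         | jq -r '.[].name'
"$rpc" -s /tmp/host.sock bdev_get_bdevs                     | jq -r '.[].name'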
00:22:56.750 15:09:20 -- common/autotest_common.sh@10 -- # set +x 00:22:56.750 15:09:20 -- host/mdns_discovery.sh@68 -- # xargs 00:22:56.750 15:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@64 -- # sort 00:22:57.010 15:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:57.010 15:09:20 -- common/autotest_common.sh@10 -- # set +x 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@64 -- # xargs 00:22:57.010 15:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:57.010 15:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.010 15:09:20 -- common/autotest_common.sh@10 -- # set +x 00:22:57.010 15:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:57.010 15:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@68 -- # sort 00:22:57.010 15:09:20 -- common/autotest_common.sh@10 -- # set +x 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@68 -- # xargs 00:22:57.010 15:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.010 [2024-11-18 15:09:20.476652] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:57.010 15:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.010 15:09:20 -- common/autotest_common.sh@10 -- # set +x 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@64 -- # xargs 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@64 -- # sort 00:22:57.010 15:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:57.010 15:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.010 15:09:20 -- common/autotest_common.sh@10 -- # set +x 00:22:57.010 [2024-11-18 15:09:20.543065] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.010 15:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:57.010 15:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.010 15:09:20 -- common/autotest_common.sh@10 -- # set +x 00:22:57.010 15:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.010 
15:09:20 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:22:57.010 15:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.010 15:09:20 -- common/autotest_common.sh@10 -- # set +x 00:22:57.010 15:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:22:57.010 15:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.010 15:09:20 -- common/autotest_common.sh@10 -- # set +x 00:22:57.010 15:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:22:57.010 15:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.010 15:09:20 -- common/autotest_common.sh@10 -- # set +x 00:22:57.010 15:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:22:57.010 15:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.010 15:09:20 -- common/autotest_common.sh@10 -- # set +x 00:22:57.010 [2024-11-18 15:09:20.582986] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:57.010 15:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:22:57.010 15:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.010 15:09:20 -- common/autotest_common.sh@10 -- # set +x 00:22:57.010 [2024-11-18 15:09:20.590988] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:57.010 15:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.010 15:09:20 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=98596 00:22:57.269 15:09:20 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:22:57.269 15:09:20 -- host/mdns_discovery.sh@125 -- # sleep 5 00:22:57.837 [2024-11-18 15:09:21.376653] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:22:58.096 Established under name 'CDC' 00:22:58.355 [2024-11-18 15:09:21.776664] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:58.355 [2024-11-18 15:09:21.776685] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:22:58.355 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:58.355 cookie is 0 00:22:58.355 is_local: 1 00:22:58.355 our_own: 0 00:22:58.355 wide_area: 0 00:22:58.355 multicast: 1 00:22:58.355 cached: 1 00:22:58.355 [2024-11-18 15:09:21.876659] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:58.355 [2024-11-18 15:09:21.876680] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:22:58.355 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:58.355 cookie is 0 00:22:58.355 is_local: 1 00:22:58.355 our_own: 0 00:22:58.355 wide_area: 0 00:22:58.355 multicast: 1 00:22:58.355 
cached: 1 00:22:59.291 [2024-11-18 15:09:22.790324] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:59.291 [2024-11-18 15:09:22.790349] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:59.291 [2024-11-18 15:09:22.790365] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:59.291 [2024-11-18 15:09:22.876419] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:22:59.551 [2024-11-18 15:09:22.880238] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:59.551 [2024-11-18 15:09:22.880256] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:59.551 [2024-11-18 15:09:22.880269] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:59.551 [2024-11-18 15:09:22.932562] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:59.551 [2024-11-18 15:09:22.932586] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:59.551 [2024-11-18 15:09:22.967687] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:22:59.551 [2024-11-18 15:09:23.030177] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:59.551 [2024-11-18 15:09:23.030200] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:02.085 15:09:25 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:23:02.085 15:09:25 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:02.085 15:09:25 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:02.085 15:09:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.085 15:09:25 -- host/mdns_discovery.sh@80 -- # sort 00:23:02.085 15:09:25 -- common/autotest_common.sh@10 -- # set +x 00:23:02.085 15:09:25 -- host/mdns_discovery.sh@80 -- # xargs 00:23:02.085 15:09:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.085 15:09:25 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:23:02.085 15:09:25 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:23:02.085 15:09:25 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:02.085 15:09:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.085 15:09:25 -- common/autotest_common.sh@10 -- # set +x 00:23:02.085 15:09:25 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:02.085 15:09:25 -- host/mdns_discovery.sh@76 -- # sort 00:23:02.085 15:09:25 -- host/mdns_discovery.sh@76 -- # xargs 00:23:02.344 15:09:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@68 -- # jq -r 
'.[].name' 00:23:02.344 15:09:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@68 -- # sort 00:23:02.344 15:09:25 -- common/autotest_common.sh@10 -- # set +x 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@68 -- # xargs 00:23:02.344 15:09:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.344 15:09:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.344 15:09:25 -- common/autotest_common.sh@10 -- # set +x 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@64 -- # sort 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@64 -- # xargs 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:02.344 15:09:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:02.344 15:09:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@72 -- # xargs 00:23:02.344 15:09:25 -- common/autotest_common.sh@10 -- # set +x 00:23:02.344 15:09:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@72 -- # xargs 00:23:02.344 15:09:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.344 15:09:25 -- common/autotest_common.sh@10 -- # set +x 00:23:02.344 15:09:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:02.344 15:09:25 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:02.344 15:09:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.344 15:09:25 -- common/autotest_common.sh@10 -- # set +x 00:23:02.603 15:09:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.603 15:09:25 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:02.603 15:09:25 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:23:02.603 15:09:25 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:23:02.603 15:09:25 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:02.603 15:09:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.603 15:09:25 -- common/autotest_common.sh@10 -- # set +x 00:23:02.603 15:09:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.603 15:09:25 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:23:02.603 15:09:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.603 15:09:25 -- common/autotest_common.sh@10 -- # set +x 00:23:02.603 15:09:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.603 15:09:25 -- host/mdns_discovery.sh@139 -- # sleep 1 00:23:03.540 15:09:26 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:23:03.540 15:09:26 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:03.540 15:09:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.540 15:09:26 -- common/autotest_common.sh@10 -- # set +x 00:23:03.540 15:09:26 -- host/mdns_discovery.sh@64 -- # sort 00:23:03.540 15:09:26 -- host/mdns_discovery.sh@64 -- # xargs 00:23:03.540 15:09:26 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:03.540 15:09:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.540 15:09:27 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:03.540 15:09:27 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:23:03.540 15:09:27 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:03.540 15:09:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.540 15:09:27 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:03.540 15:09:27 -- common/autotest_common.sh@10 -- # set +x 00:23:03.540 15:09:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.540 15:09:27 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:03.540 15:09:27 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:03.540 15:09:27 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:23:03.540 15:09:27 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:03.540 15:09:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.540 15:09:27 -- common/autotest_common.sh@10 -- # set +x 00:23:03.540 [2024-11-18 15:09:27.097460] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:03.540 [2024-11-18 15:09:27.097928] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:03.540 [2024-11-18 15:09:27.097956] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:03.540 [2024-11-18 15:09:27.097985] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:03.540 [2024-11-18 15:09:27.097996] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:03.540 15:09:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.540 15:09:27 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:23:03.540 15:09:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.540 15:09:27 -- common/autotest_common.sh@10 -- # set +x 00:23:03.540 [2024-11-18 15:09:27.105393] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:03.540 [2024-11-18 15:09:27.105940] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:03.540 [2024-11-18 15:09:27.106007] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:03.540 15:09:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.540 15:09:27 -- host/mdns_discovery.sh@149 -- # sleep 1 00:23:03.799 [2024-11-18 15:09:27.237016] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:23:03.800 [2024-11-18 15:09:27.237169] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:23:03.800 [2024-11-18 15:09:27.294227] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:03.800 [2024-11-18 15:09:27.294248] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:03.800 [2024-11-18 15:09:27.294254] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:03.800 [2024-11-18 15:09:27.294269] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:03.800 [2024-11-18 15:09:27.294326] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:03.800 [2024-11-18 15:09:27.294336] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 
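The path check the trace performs shortly after this point (get_subsystem_paths, host/mdns_discovery.sh@72, used by the @153/@154 checks) is easiest to read as one pipeline. The sketch below is reassembled from the rpc_cmd / jq / sort / xargs fragments visible in the trace and is not the verbatim helper; /tmp/host.sock and the controller name are taken from the trace.
# Sketch: list the trsvcids (listener ports) of every path attached to one discovered controller.
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
# With the 4421 listeners just added, the trace expects this to print "4420 4421".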
00:23:03.800 [2024-11-18 15:09:27.294340] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:03.800 [2024-11-18 15:09:27.294352] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:03.800 [2024-11-18 15:09:27.340119] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:03.800 [2024-11-18 15:09:27.340137] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:03.800 [2024-11-18 15:09:27.340172] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:03.800 [2024-11-18 15:09:27.340180] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:04.737 15:09:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:04.737 15:09:28 -- common/autotest_common.sh@10 -- # set +x 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@68 -- # sort 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@68 -- # xargs 00:23:04.737 15:09:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:04.737 15:09:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.737 15:09:28 -- common/autotest_common.sh@10 -- # set +x 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@64 -- # sort 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@64 -- # xargs 00:23:04.737 15:09:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:04.737 15:09:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.737 15:09:28 -- common/autotest_common.sh@10 -- # set +x 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@72 -- # xargs 00:23:04.737 15:09:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:04.737 
15:09:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.737 15:09:28 -- common/autotest_common.sh@10 -- # set +x 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:04.737 15:09:28 -- host/mdns_discovery.sh@72 -- # xargs 00:23:04.737 15:09:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.998 15:09:28 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:04.998 15:09:28 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:23:04.998 15:09:28 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:04.998 15:09:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.998 15:09:28 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:04.998 15:09:28 -- common/autotest_common.sh@10 -- # set +x 00:23:04.998 15:09:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.998 15:09:28 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:04.998 15:09:28 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:04.998 15:09:28 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:23:04.998 15:09:28 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:04.998 15:09:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.998 15:09:28 -- common/autotest_common.sh@10 -- # set +x 00:23:04.998 [2024-11-18 15:09:28.411102] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:04.998 [2024-11-18 15:09:28.411128] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:04.998 [2024-11-18 15:09:28.411155] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:04.999 [2024-11-18 15:09:28.411166] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:04.999 15:09:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.999 15:09:28 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:04.999 15:09:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.999 15:09:28 -- common/autotest_common.sh@10 -- # set +x 00:23:04.999 [2024-11-18 15:09:28.418120] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:04.999 [2024-11-18 15:09:28.418181] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:04.999 [2024-11-18 15:09:28.419886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.999 [2024-11-18 15:09:28.419917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.999 [2024-11-18 15:09:28.419928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.999 [2024-11-18 15:09:28.419936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.999 [2024-11-18 15:09:28.419944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.999 [2024-11-18 15:09:28.419951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.999 [2024-11-18 15:09:28.419960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.999 [2024-11-18 15:09:28.419968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.999 [2024-11-18 15:09:28.419976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0aa0 is same with the state(5) to be set 00:23:04.999 [2024-11-18 15:09:28.422038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.999 [2024-11-18 15:09:28.422065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.999 [2024-11-18 15:09:28.422076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.999 [2024-11-18 15:09:28.422085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.999 [2024-11-18 15:09:28.422094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.999 [2024-11-18 15:09:28.422102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.999 [2024-11-18 15:09:28.422111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.999 [2024-11-18 15:09:28.422134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.999 [2024-11-18 15:09:28.422143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b760 is same with the state(5) to be set 00:23:04.999 15:09:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.999 15:09:28 -- host/mdns_discovery.sh@162 -- # sleep 1 00:23:04.999 [2024-11-18 15:09:28.429851] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b0aa0 (9): Bad file descriptor 00:23:04.999 [2024-11-18 15:09:28.432004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139b760 (9): Bad file descriptor 00:23:04.999 [2024-11-18 15:09:28.439866] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:04.999 [2024-11-18 15:09:28.439949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.999 [2024-11-18 15:09:28.439991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.999 [2024-11-18 15:09:28.440005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b0aa0 with addr=10.0.0.2, port=4420 00:23:04.999 [2024-11-18 15:09:28.440014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0aa0 is same with the state(5) to be set 00:23:04.999 [2024-11-18 15:09:28.440028] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b0aa0 (9): Bad file descriptor 00:23:04.999 [2024-11-18 
15:09:28.440040] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:04.999 [2024-11-18 15:09:28.440047] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:04.999 [2024-11-18 15:09:28.440055] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:04.999 [2024-11-18 15:09:28.440069] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:04.999 [2024-11-18 15:09:28.442015] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:04.999 [2024-11-18 15:09:28.442104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.999 [2024-11-18 15:09:28.442160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.999 [2024-11-18 15:09:28.442175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139b760 with addr=10.0.0.3, port=4420 00:23:04.999 [2024-11-18 15:09:28.442183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b760 is same with the state(5) to be set 00:23:04.999 [2024-11-18 15:09:28.442197] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139b760 (9): Bad file descriptor 00:23:04.999 [2024-11-18 15:09:28.442223] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:04.999 [2024-11-18 15:09:28.442232] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:04.999 [2024-11-18 15:09:28.442240] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:04.999 [2024-11-18 15:09:28.442266] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:04.999 [2024-11-18 15:09:28.449916] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:04.999 [2024-11-18 15:09:28.449986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.999 [2024-11-18 15:09:28.450024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.999 [2024-11-18 15:09:28.450037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b0aa0 with addr=10.0.0.2, port=4420 00:23:04.999 [2024-11-18 15:09:28.450046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0aa0 is same with the state(5) to be set 00:23:04.999 [2024-11-18 15:09:28.450059] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b0aa0 (9): Bad file descriptor 00:23:04.999 [2024-11-18 15:09:28.450071] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:04.999 [2024-11-18 15:09:28.450077] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:04.999 [2024-11-18 15:09:28.450086] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:04.999 [2024-11-18 15:09:28.450098] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:04.999 [2024-11-18 15:09:28.452077] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:04.999 [2024-11-18 15:09:28.452159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.999 [2024-11-18 15:09:28.452199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.999 [2024-11-18 15:09:28.452213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139b760 with addr=10.0.0.3, port=4420 00:23:04.999 [2024-11-18 15:09:28.452221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b760 is same with the state(5) to be set 00:23:04.999 [2024-11-18 15:09:28.452235] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139b760 (9): Bad file descriptor 00:23:04.999 [2024-11-18 15:09:28.452261] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:04.999 [2024-11-18 15:09:28.452270] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:04.999 [2024-11-18 15:09:28.452278] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:04.999 [2024-11-18 15:09:28.452290] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:04.999 [2024-11-18 15:09:28.459961] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:04.999 [2024-11-18 15:09:28.460027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.999 [2024-11-18 15:09:28.460065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.999 [2024-11-18 15:09:28.460079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b0aa0 with addr=10.0.0.2, port=4420 00:23:04.999 [2024-11-18 15:09:28.460088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0aa0 is same with the state(5) to be set 00:23:04.999 [2024-11-18 15:09:28.460101] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b0aa0 (9): Bad file descriptor 00:23:05.000 [2024-11-18 15:09:28.460112] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:05.000 [2024-11-18 15:09:28.460119] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:05.000 [2024-11-18 15:09:28.460126] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:05.000 [2024-11-18 15:09:28.460138] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
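The repeated ERROR blocks from here down to roughly 15:09:28.549 all follow the same pattern: after the two nvmf_subsystem_remove_listener calls above drop the 4420 listeners, the bdev_nvme reconnect path keeps retrying 10.0.0.2:4420 and 10.0.0.3:4420, connect() fails with errno 111 (ECONNREFUSED), and each attempt ends in "Resetting controller failed." until the next discovery log page drops the stale 4420 entries. A test that wanted to wait for that to settle could poll with the same RPCs the script already uses; the loop below is purely illustrative and is not part of mdns_discovery.sh.
# Illustrative sketch, not from the traced script: wait until the removed 4420 path is gone
# and only the 4421 path remains on one of the discovered controllers.
while [[ "$(rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)" != "4421" ]]; do
    sleep 1
done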
00:23:05.000 [2024-11-18 15:09:28.462133] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:05.000 [2024-11-18 15:09:28.462217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.000 [2024-11-18 15:09:28.462257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.000 [2024-11-18 15:09:28.462271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139b760 with addr=10.0.0.3, port=4420 00:23:05.000 [2024-11-18 15:09:28.462280] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b760 is same with the state(5) to be set 00:23:05.000 [2024-11-18 15:09:28.462294] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139b760 (9): Bad file descriptor 00:23:05.000 [2024-11-18 15:09:28.462319] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:05.000 [2024-11-18 15:09:28.462328] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:05.000 [2024-11-18 15:09:28.462347] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:05.000 [2024-11-18 15:09:28.462361] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:05.000 [2024-11-18 15:09:28.470004] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.000 [2024-11-18 15:09:28.470078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.000 [2024-11-18 15:09:28.470119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.000 [2024-11-18 15:09:28.470132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b0aa0 with addr=10.0.0.2, port=4420 00:23:05.000 [2024-11-18 15:09:28.470141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0aa0 is same with the state(5) to be set 00:23:05.000 [2024-11-18 15:09:28.470155] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b0aa0 (9): Bad file descriptor 00:23:05.000 [2024-11-18 15:09:28.470166] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:05.000 [2024-11-18 15:09:28.470174] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:05.000 [2024-11-18 15:09:28.470181] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:05.000 [2024-11-18 15:09:28.470193] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:05.000 [2024-11-18 15:09:28.472178] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:05.000 [2024-11-18 15:09:28.472245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.000 [2024-11-18 15:09:28.472283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.000 [2024-11-18 15:09:28.472297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139b760 with addr=10.0.0.3, port=4420 00:23:05.000 [2024-11-18 15:09:28.472305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b760 is same with the state(5) to be set 00:23:05.000 [2024-11-18 15:09:28.472346] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139b760 (9): Bad file descriptor 00:23:05.000 [2024-11-18 15:09:28.472376] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:05.000 [2024-11-18 15:09:28.472385] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:05.000 [2024-11-18 15:09:28.472393] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:05.000 [2024-11-18 15:09:28.472405] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:05.000 [2024-11-18 15:09:28.480049] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.000 [2024-11-18 15:09:28.480118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.000 [2024-11-18 15:09:28.480157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.000 [2024-11-18 15:09:28.480170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b0aa0 with addr=10.0.0.2, port=4420 00:23:05.000 [2024-11-18 15:09:28.480178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0aa0 is same with the state(5) to be set 00:23:05.000 [2024-11-18 15:09:28.480192] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b0aa0 (9): Bad file descriptor 00:23:05.000 [2024-11-18 15:09:28.480203] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:05.000 [2024-11-18 15:09:28.480210] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:05.000 [2024-11-18 15:09:28.480218] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:05.000 [2024-11-18 15:09:28.480229] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:05.000 [2024-11-18 15:09:28.482220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:05.000 [2024-11-18 15:09:28.482288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.000 [2024-11-18 15:09:28.482338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.000 [2024-11-18 15:09:28.482353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139b760 with addr=10.0.0.3, port=4420 00:23:05.000 [2024-11-18 15:09:28.482361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b760 is same with the state(5) to be set 00:23:05.000 [2024-11-18 15:09:28.482375] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139b760 (9): Bad file descriptor 00:23:05.000 [2024-11-18 15:09:28.482402] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:05.000 [2024-11-18 15:09:28.482411] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:05.000 [2024-11-18 15:09:28.482418] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:05.000 [2024-11-18 15:09:28.482430] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:05.000 [2024-11-18 15:09:28.490094] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.000 [2024-11-18 15:09:28.490174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.000 [2024-11-18 15:09:28.490212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.000 [2024-11-18 15:09:28.490225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b0aa0 with addr=10.0.0.2, port=4420 00:23:05.000 [2024-11-18 15:09:28.490234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0aa0 is same with the state(5) to be set 00:23:05.000 [2024-11-18 15:09:28.490247] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b0aa0 (9): Bad file descriptor 00:23:05.000 [2024-11-18 15:09:28.490258] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:05.000 [2024-11-18 15:09:28.490266] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:05.000 [2024-11-18 15:09:28.490273] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:05.000 [2024-11-18 15:09:28.490285] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:05.000 [2024-11-18 15:09:28.492264] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:05.000 [2024-11-18 15:09:28.492371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.000 [2024-11-18 15:09:28.492414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.000 [2024-11-18 15:09:28.492428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139b760 with addr=10.0.0.3, port=4420 00:23:05.000 [2024-11-18 15:09:28.492436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b760 is same with the state(5) to be set 00:23:05.000 [2024-11-18 15:09:28.492451] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139b760 (9): Bad file descriptor 00:23:05.000 [2024-11-18 15:09:28.492477] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:05.001 [2024-11-18 15:09:28.492486] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:05.001 [2024-11-18 15:09:28.492494] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:05.001 [2024-11-18 15:09:28.492506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:05.001 [2024-11-18 15:09:28.500152] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.001 [2024-11-18 15:09:28.500220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.001 [2024-11-18 15:09:28.500258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.001 [2024-11-18 15:09:28.500271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b0aa0 with addr=10.0.0.2, port=4420 00:23:05.001 [2024-11-18 15:09:28.500279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0aa0 is same with the state(5) to be set 00:23:05.001 [2024-11-18 15:09:28.500293] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b0aa0 (9): Bad file descriptor 00:23:05.001 [2024-11-18 15:09:28.500304] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:05.001 [2024-11-18 15:09:28.500311] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:05.001 [2024-11-18 15:09:28.500341] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:05.001 [2024-11-18 15:09:28.500355] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:05.001 [2024-11-18 15:09:28.502321] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:05.001 [2024-11-18 15:09:28.502414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.001 [2024-11-18 15:09:28.502455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.001 [2024-11-18 15:09:28.502470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139b760 with addr=10.0.0.3, port=4420 00:23:05.001 [2024-11-18 15:09:28.502478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b760 is same with the state(5) to be set 00:23:05.001 [2024-11-18 15:09:28.502492] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139b760 (9): Bad file descriptor 00:23:05.001 [2024-11-18 15:09:28.502518] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:05.001 [2024-11-18 15:09:28.502527] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:05.001 [2024-11-18 15:09:28.502535] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:05.001 [2024-11-18 15:09:28.502548] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:05.001 [2024-11-18 15:09:28.510198] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.001 [2024-11-18 15:09:28.510287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.001 [2024-11-18 15:09:28.510338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.001 [2024-11-18 15:09:28.510353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b0aa0 with addr=10.0.0.2, port=4420 00:23:05.001 [2024-11-18 15:09:28.510362] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0aa0 is same with the state(5) to be set 00:23:05.001 [2024-11-18 15:09:28.510376] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b0aa0 (9): Bad file descriptor 00:23:05.001 [2024-11-18 15:09:28.510389] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:05.001 [2024-11-18 15:09:28.510396] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:05.001 [2024-11-18 15:09:28.510404] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:05.001 [2024-11-18 15:09:28.510416] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
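When skimming the rest of this retry storm, the tqpair pointer is the quickest way to tell the two discovery targets apart: 0x13b0aa0 carries the nqn.2016-06.io.spdk:cnode0 / 10.0.0.2:4420 attempts, and 0x139b760 carries the nqn.2016-06.io.spdk:cnode20 / 10.0.0.3:4420 attempts. A throwaway filter along these lines separates the two streams (the log file name is hypothetical):
# Illustrative only -- "build.log" is a placeholder for wherever this console output was saved.
grep 'tqpair=0x139b760' build.log                  # only the cnode20 / 10.0.0.3 retry lines
grep -c 'Resetting controller failed' build.log    # total number of failed reset attempts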
00:23:05.001 [2024-11-18 15:09:28.512403] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:05.001 [2024-11-18 15:09:28.512466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.001 [2024-11-18 15:09:28.512506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.001 [2024-11-18 15:09:28.512520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139b760 with addr=10.0.0.3, port=4420 00:23:05.001 [2024-11-18 15:09:28.512528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b760 is same with the state(5) to be set 00:23:05.001 [2024-11-18 15:09:28.512542] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139b760 (9): Bad file descriptor 00:23:05.001 [2024-11-18 15:09:28.512568] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:05.001 [2024-11-18 15:09:28.512577] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:05.001 [2024-11-18 15:09:28.512585] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:05.001 [2024-11-18 15:09:28.512613] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:05.001 [2024-11-18 15:09:28.520259] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.001 [2024-11-18 15:09:28.520356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.001 [2024-11-18 15:09:28.520398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.001 [2024-11-18 15:09:28.520413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b0aa0 with addr=10.0.0.2, port=4420 00:23:05.001 [2024-11-18 15:09:28.520422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0aa0 is same with the state(5) to be set 00:23:05.001 [2024-11-18 15:09:28.520436] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b0aa0 (9): Bad file descriptor 00:23:05.001 [2024-11-18 15:09:28.520448] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:05.001 [2024-11-18 15:09:28.520456] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:05.001 [2024-11-18 15:09:28.520464] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:05.001 [2024-11-18 15:09:28.520476] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:05.001 [2024-11-18 15:09:28.522440] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:05.001 [2024-11-18 15:09:28.522508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.001 [2024-11-18 15:09:28.522546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.001 [2024-11-18 15:09:28.522560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139b760 with addr=10.0.0.3, port=4420 00:23:05.001 [2024-11-18 15:09:28.522568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b760 is same with the state(5) to be set 00:23:05.001 [2024-11-18 15:09:28.522581] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139b760 (9): Bad file descriptor 00:23:05.001 [2024-11-18 15:09:28.522605] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:05.001 [2024-11-18 15:09:28.522614] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:05.001 [2024-11-18 15:09:28.522621] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:05.001 [2024-11-18 15:09:28.522633] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:05.001 [2024-11-18 15:09:28.530321] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.001 [2024-11-18 15:09:28.530398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.001 [2024-11-18 15:09:28.530439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.001 [2024-11-18 15:09:28.530454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b0aa0 with addr=10.0.0.2, port=4420 00:23:05.001 [2024-11-18 15:09:28.530462] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0aa0 is same with the state(5) to be set 00:23:05.001 [2024-11-18 15:09:28.530476] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b0aa0 (9): Bad file descriptor 00:23:05.001 [2024-11-18 15:09:28.530504] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:05.001 [2024-11-18 15:09:28.530512] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:05.001 [2024-11-18 15:09:28.530520] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:05.001 [2024-11-18 15:09:28.530533] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:05.001 [2024-11-18 15:09:28.532484] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:05.001 [2024-11-18 15:09:28.532548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.001 [2024-11-18 15:09:28.532587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.001 [2024-11-18 15:09:28.532601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139b760 with addr=10.0.0.3, port=4420 00:23:05.001 [2024-11-18 15:09:28.532609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b760 is same with the state(5) to be set 00:23:05.002 [2024-11-18 15:09:28.532622] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139b760 (9): Bad file descriptor 00:23:05.002 [2024-11-18 15:09:28.532646] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:05.002 [2024-11-18 15:09:28.532655] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:05.002 [2024-11-18 15:09:28.532662] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:05.002 [2024-11-18 15:09:28.532674] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:05.002 [2024-11-18 15:09:28.540373] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:05.002 [2024-11-18 15:09:28.540440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.002 [2024-11-18 15:09:28.540478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.002 [2024-11-18 15:09:28.540491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b0aa0 with addr=10.0.0.2, port=4420 00:23:05.002 [2024-11-18 15:09:28.540500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0aa0 is same with the state(5) to be set 00:23:05.002 [2024-11-18 15:09:28.540514] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b0aa0 (9): Bad file descriptor 00:23:05.002 [2024-11-18 15:09:28.540525] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:05.002 [2024-11-18 15:09:28.540532] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:05.002 [2024-11-18 15:09:28.540539] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:05.002 [2024-11-18 15:09:28.540551] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:05.002 [2024-11-18 15:09:28.542525] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:05.002 [2024-11-18 15:09:28.542590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.002 [2024-11-18 15:09:28.542629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.002 [2024-11-18 15:09:28.542642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139b760 with addr=10.0.0.3, port=4420 00:23:05.002 [2024-11-18 15:09:28.542651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139b760 is same with the state(5) to be set 00:23:05.002 [2024-11-18 15:09:28.542664] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139b760 (9): Bad file descriptor 00:23:05.002 [2024-11-18 15:09:28.542688] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:05.002 [2024-11-18 15:09:28.542697] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:05.002 [2024-11-18 15:09:28.542704] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:05.002 [2024-11-18 15:09:28.542716] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:05.002 [2024-11-18 15:09:28.549687] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:05.002 [2024-11-18 15:09:28.549711] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:05.002 [2024-11-18 15:09:28.549727] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:05.002 [2024-11-18 15:09:28.549754] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:23:05.002 [2024-11-18 15:09:28.549767] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:05.002 [2024-11-18 15:09:28.549778] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:05.261 [2024-11-18 15:09:28.635756] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:05.261 [2024-11-18 15:09:28.635804] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:06.197 15:09:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.197 15:09:29 -- common/autotest_common.sh@10 -- # set +x 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@68 -- # xargs 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@68 -- # sort 00:23:06.197 15:09:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 
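The checks that follow (mdns_discovery.sh@165 through @168) confirm the end state of this step: the bdev list is unchanged, each discovered controller now exposes only the 4421 path, and no new bdev notifications were produced relative to the last seen notify_id of 4. The notification check as traced reduces to the call below, taken directly from the @87/@88 fragments in the trace.
# Count notification events newer than notify_id 4; removing listeners adds no bdevs,
# so the trace records a count of 0 here.
rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 | jq '. | length'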
00:23:06.197 15:09:29 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@64 -- # sort 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:06.197 15:09:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.197 15:09:29 -- common/autotest_common.sh@10 -- # set +x 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@64 -- # xargs 00:23:06.197 15:09:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@72 -- # xargs 00:23:06.197 15:09:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.197 15:09:29 -- common/autotest_common.sh@10 -- # set +x 00:23:06.197 15:09:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:06.197 15:09:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.197 15:09:29 -- common/autotest_common.sh@10 -- # set +x 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@72 -- # xargs 00:23:06.197 15:09:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:06.197 15:09:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.197 15:09:29 -- common/autotest_common.sh@10 -- # set +x 00:23:06.197 15:09:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:06.197 15:09:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.197 15:09:29 -- common/autotest_common.sh@10 -- # set +x 00:23:06.197 15:09:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.197 15:09:29 -- host/mdns_discovery.sh@172 -- # sleep 1 00:23:06.197 [2024-11-18 15:09:29.776711] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:07.135 15:09:30 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:07.135 15:09:30 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:07.135 15:09:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.135 15:09:30 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:07.135 15:09:30 -- common/autotest_common.sh@10 -- # set +x 00:23:07.135 15:09:30 -- host/mdns_discovery.sh@80 -- # sort 00:23:07.135 15:09:30 -- host/mdns_discovery.sh@80 -- # xargs 00:23:07.395 15:09:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.395 15:09:30 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:07.395 15:09:30 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:07.395 15:09:30 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:07.395 15:09:30 -- host/mdns_discovery.sh@68 -- # sort 00:23:07.395 15:09:30 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:07.395 15:09:30 -- host/mdns_discovery.sh@68 -- # xargs 00:23:07.395 15:09:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.395 15:09:30 -- common/autotest_common.sh@10 -- # set +x 00:23:07.395 15:09:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.395 15:09:30 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:07.395 15:09:30 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:07.395 15:09:30 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.395 15:09:30 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:07.395 15:09:30 -- host/mdns_discovery.sh@64 -- # sort 00:23:07.395 15:09:30 -- host/mdns_discovery.sh@64 -- # xargs 00:23:07.395 15:09:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.395 15:09:30 -- common/autotest_common.sh@10 -- # set +x 00:23:07.395 15:09:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.395 15:09:30 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:07.395 15:09:30 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:07.395 15:09:30 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:07.395 15:09:30 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:07.395 15:09:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.395 15:09:30 -- common/autotest_common.sh@10 -- # set +x 00:23:07.395 15:09:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.395 15:09:30 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:07.395 15:09:30 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:07.395 15:09:30 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:07.395 15:09:30 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:07.395 15:09:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.395 15:09:30 -- common/autotest_common.sh@10 -- # set +x 00:23:07.395 15:09:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.395 15:09:30 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:07.395 15:09:30 -- common/autotest_common.sh@650 -- # local es=0 00:23:07.395 15:09:30 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:07.395 15:09:30 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:07.395 15:09:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.395 15:09:30 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:07.395 15:09:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.395 15:09:30 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:07.395 15:09:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.395 15:09:30 -- common/autotest_common.sh@10 -- # set +x 00:23:07.395 [2024-11-18 15:09:30.950023] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:07.395 2024/11/18 15:09:30 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:07.395 request: 00:23:07.395 { 00:23:07.395 "method": "bdev_nvme_start_mdns_discovery", 00:23:07.395 "params": { 00:23:07.395 "name": "mdns", 00:23:07.395 "svcname": "_nvme-disc._http", 00:23:07.395 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:07.395 } 00:23:07.395 } 00:23:07.395 Got JSON-RPC error response 00:23:07.395 GoRPCClient: error on JSON-RPC call 00:23:07.395 15:09:30 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:07.395 15:09:30 -- common/autotest_common.sh@653 -- # es=1 00:23:07.395 15:09:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:07.395 15:09:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:07.395 15:09:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:07.395 15:09:30 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:07.963 [2024-11-18 15:09:31.338692] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:07.963 [2024-11-18 15:09:31.438689] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:07.963 [2024-11-18 15:09:31.538718] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:07.963 [2024-11-18 15:09:31.538736] 
bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:07.963 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:07.963 cookie is 0 00:23:07.963 is_local: 1 00:23:07.963 our_own: 0 00:23:07.963 wide_area: 0 00:23:07.963 multicast: 1 00:23:07.963 cached: 1 00:23:08.222 [2024-11-18 15:09:31.638703] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:08.222 [2024-11-18 15:09:31.638724] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:08.222 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:08.222 cookie is 0 00:23:08.222 is_local: 1 00:23:08.222 our_own: 0 00:23:08.222 wide_area: 0 00:23:08.222 multicast: 1 00:23:08.222 cached: 1 00:23:09.158 [2024-11-18 15:09:32.551655] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:09.159 [2024-11-18 15:09:32.551676] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:09.159 [2024-11-18 15:09:32.551690] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:09.159 [2024-11-18 15:09:32.637738] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:09.159 [2024-11-18 15:09:32.651441] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:09.159 [2024-11-18 15:09:32.651460] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:09.159 [2024-11-18 15:09:32.651474] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:09.159 [2024-11-18 15:09:32.707131] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:09.159 [2024-11-18 15:09:32.707156] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:09.159 [2024-11-18 15:09:32.738269] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:09.417 [2024-11-18 15:09:32.796778] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:09.417 [2024-11-18 15:09:32.796931] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:12.755 15:09:35 -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:23:12.755 15:09:35 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:12.755 15:09:35 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:12.755 15:09:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.755 15:09:35 -- common/autotest_common.sh@10 -- # set +x 00:23:12.755 15:09:35 -- host/mdns_discovery.sh@80 -- # sort 00:23:12.755 15:09:35 -- host/mdns_discovery.sh@80 -- # xargs 00:23:12.755 15:09:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.755 15:09:36 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:23:12.755 15:09:36 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:12.755 15:09:36 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_discovery_info 00:23:12.755 15:09:36 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:12.755 15:09:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.755 15:09:36 -- common/autotest_common.sh@10 -- # set +x 00:23:12.755 15:09:36 -- host/mdns_discovery.sh@76 -- # sort 00:23:12.755 15:09:36 -- host/mdns_discovery.sh@76 -- # xargs 00:23:12.755 15:09:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.755 15:09:36 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:12.755 15:09:36 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:12.755 15:09:36 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.755 15:09:36 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:12.755 15:09:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.755 15:09:36 -- common/autotest_common.sh@10 -- # set +x 00:23:12.755 15:09:36 -- host/mdns_discovery.sh@64 -- # xargs 00:23:12.755 15:09:36 -- host/mdns_discovery.sh@64 -- # sort 00:23:12.755 15:09:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.755 15:09:36 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:12.755 15:09:36 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:12.755 15:09:36 -- common/autotest_common.sh@650 -- # local es=0 00:23:12.755 15:09:36 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:12.755 15:09:36 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:12.755 15:09:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.755 15:09:36 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:12.755 15:09:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.755 15:09:36 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:12.755 15:09:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.755 15:09:36 -- common/autotest_common.sh@10 -- # set +x 00:23:12.755 [2024-11-18 15:09:36.136485] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:12.755 2024/11/18 15:09:36 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:12.755 request: 00:23:12.755 { 00:23:12.755 "method": "bdev_nvme_start_mdns_discovery", 00:23:12.755 "params": { 00:23:12.755 "name": "cdc", 00:23:12.755 "svcname": "_nvme-disc._tcp", 00:23:12.755 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:12.755 } 00:23:12.755 } 00:23:12.755 Got JSON-RPC error response 00:23:12.755 GoRPCClient: error on JSON-RPC call 00:23:12.755 15:09:36 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:12.755 15:09:36 -- common/autotest_common.sh@653 -- # es=1 00:23:12.755 15:09:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:12.755 15:09:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:23:12.755 15:09:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:12.755 15:09:36 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:12.755 15:09:36 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:12.755 15:09:36 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:12.755 15:09:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.755 15:09:36 -- common/autotest_common.sh@10 -- # set +x 00:23:12.755 15:09:36 -- host/mdns_discovery.sh@76 -- # sort 00:23:12.755 15:09:36 -- host/mdns_discovery.sh@76 -- # xargs 00:23:12.755 15:09:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.755 15:09:36 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:12.755 15:09:36 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:12.755 15:09:36 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.755 15:09:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.755 15:09:36 -- common/autotest_common.sh@10 -- # set +x 00:23:12.756 15:09:36 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:12.756 15:09:36 -- host/mdns_discovery.sh@64 -- # sort 00:23:12.756 15:09:36 -- host/mdns_discovery.sh@64 -- # xargs 00:23:12.756 15:09:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.756 15:09:36 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:12.756 15:09:36 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:12.756 15:09:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.756 15:09:36 -- common/autotest_common.sh@10 -- # set +x 00:23:12.756 15:09:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.756 15:09:36 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:23:12.756 15:09:36 -- host/mdns_discovery.sh@197 -- # kill 98513 00:23:12.756 15:09:36 -- host/mdns_discovery.sh@200 -- # wait 98513 00:23:13.014 [2024-11-18 15:09:36.354284] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:13.014 15:09:36 -- host/mdns_discovery.sh@201 -- # kill 98596 00:23:13.014 Got SIGTERM, quitting. 00:23:13.014 15:09:36 -- host/mdns_discovery.sh@202 -- # kill 98544 00:23:13.014 15:09:36 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:23:13.014 15:09:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:13.014 15:09:36 -- nvmf/common.sh@116 -- # sync 00:23:13.014 Got SIGTERM, quitting. 00:23:13.014 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:13.014 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:13.014 avahi-daemon 0.8 exiting. 
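By this point the mDNS test has covered its negative cases: bdev_nvme_start_mdns_discovery is rejected with JSON-RPC error -17 (File exists) both when the name "mdns" is reused and when another discovery for _nvme-disc._tcp is already running, and the teardown then stops discovery, after which the avahi daemon exits on SIGTERM. A condensed sketch of that duplicate-start check, assuming the same host socket and host NQN as in the log:

  SOCK=/tmp/host.sock
  HOSTNQN=nqn.2021-12.io.spdk:test
  rpc.py -s $SOCK bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q $HOSTNQN
  # A second start reusing the name (or targeting the same service type) should fail
  # with "Code=-17 Msg=File exists" while the first discovery service is still active.
  rpc.py -s $SOCK bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q $HOSTNQN \
    && echo "unexpected success" || echo "rejected as expected"
  # Tear down once the checks are done.
  rpc.py -s $SOCK bdev_nvme_stop_mdns_discovery -b mdns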
00:23:13.014 15:09:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:13.014 15:09:36 -- nvmf/common.sh@119 -- # set +e 00:23:13.014 15:09:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:13.014 15:09:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:13.014 rmmod nvme_tcp 00:23:13.014 rmmod nvme_fabrics 00:23:13.014 rmmod nvme_keyring 00:23:13.014 15:09:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:13.014 15:09:36 -- nvmf/common.sh@123 -- # set -e 00:23:13.014 15:09:36 -- nvmf/common.sh@124 -- # return 0 00:23:13.014 15:09:36 -- nvmf/common.sh@477 -- # '[' -n 98459 ']' 00:23:13.014 15:09:36 -- nvmf/common.sh@478 -- # killprocess 98459 00:23:13.014 15:09:36 -- common/autotest_common.sh@936 -- # '[' -z 98459 ']' 00:23:13.014 15:09:36 -- common/autotest_common.sh@940 -- # kill -0 98459 00:23:13.014 15:09:36 -- common/autotest_common.sh@941 -- # uname 00:23:13.014 15:09:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:13.014 15:09:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98459 00:23:13.014 15:09:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:13.014 15:09:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:13.014 killing process with pid 98459 00:23:13.014 15:09:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98459' 00:23:13.014 15:09:36 -- common/autotest_common.sh@955 -- # kill 98459 00:23:13.014 15:09:36 -- common/autotest_common.sh@960 -- # wait 98459 00:23:13.273 15:09:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:13.273 15:09:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:13.273 15:09:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:13.273 15:09:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:13.273 15:09:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:13.273 15:09:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.273 15:09:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.274 15:09:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.533 15:09:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:13.533 ************************************ 00:23:13.533 END TEST nvmf_mdns_discovery 00:23:13.533 ************************************ 00:23:13.533 00:23:13.533 real 0m20.549s 00:23:13.533 user 0m40.125s 00:23:13.533 sys 0m2.012s 00:23:13.533 15:09:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:13.533 15:09:36 -- common/autotest_common.sh@10 -- # set +x 00:23:13.533 15:09:36 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:23:13.533 15:09:36 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:13.533 15:09:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:13.533 15:09:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:13.533 15:09:36 -- common/autotest_common.sh@10 -- # set +x 00:23:13.533 ************************************ 00:23:13.533 START TEST nvmf_multipath 00:23:13.533 ************************************ 00:23:13.533 15:09:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:13.533 * Looking for test storage... 
00:23:13.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:13.533 15:09:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:13.533 15:09:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:13.533 15:09:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:13.533 15:09:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:13.533 15:09:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:13.533 15:09:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:13.533 15:09:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:13.533 15:09:37 -- scripts/common.sh@335 -- # IFS=.-: 00:23:13.533 15:09:37 -- scripts/common.sh@335 -- # read -ra ver1 00:23:13.533 15:09:37 -- scripts/common.sh@336 -- # IFS=.-: 00:23:13.533 15:09:37 -- scripts/common.sh@336 -- # read -ra ver2 00:23:13.533 15:09:37 -- scripts/common.sh@337 -- # local 'op=<' 00:23:13.533 15:09:37 -- scripts/common.sh@339 -- # ver1_l=2 00:23:13.533 15:09:37 -- scripts/common.sh@340 -- # ver2_l=1 00:23:13.533 15:09:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:13.533 15:09:37 -- scripts/common.sh@343 -- # case "$op" in 00:23:13.533 15:09:37 -- scripts/common.sh@344 -- # : 1 00:23:13.533 15:09:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:13.533 15:09:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:13.533 15:09:37 -- scripts/common.sh@364 -- # decimal 1 00:23:13.533 15:09:37 -- scripts/common.sh@352 -- # local d=1 00:23:13.533 15:09:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:13.533 15:09:37 -- scripts/common.sh@354 -- # echo 1 00:23:13.533 15:09:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:13.533 15:09:37 -- scripts/common.sh@365 -- # decimal 2 00:23:13.533 15:09:37 -- scripts/common.sh@352 -- # local d=2 00:23:13.533 15:09:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:13.533 15:09:37 -- scripts/common.sh@354 -- # echo 2 00:23:13.533 15:09:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:13.533 15:09:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:13.533 15:09:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:13.533 15:09:37 -- scripts/common.sh@367 -- # return 0 00:23:13.533 15:09:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:13.533 15:09:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:13.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.533 --rc genhtml_branch_coverage=1 00:23:13.533 --rc genhtml_function_coverage=1 00:23:13.533 --rc genhtml_legend=1 00:23:13.533 --rc geninfo_all_blocks=1 00:23:13.533 --rc geninfo_unexecuted_blocks=1 00:23:13.533 00:23:13.533 ' 00:23:13.533 15:09:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:13.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.533 --rc genhtml_branch_coverage=1 00:23:13.533 --rc genhtml_function_coverage=1 00:23:13.533 --rc genhtml_legend=1 00:23:13.533 --rc geninfo_all_blocks=1 00:23:13.533 --rc geninfo_unexecuted_blocks=1 00:23:13.533 00:23:13.533 ' 00:23:13.533 15:09:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:13.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.533 --rc genhtml_branch_coverage=1 00:23:13.533 --rc genhtml_function_coverage=1 00:23:13.533 --rc genhtml_legend=1 00:23:13.533 --rc geninfo_all_blocks=1 00:23:13.533 --rc geninfo_unexecuted_blocks=1 00:23:13.533 00:23:13.533 ' 00:23:13.533 
15:09:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:13.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.533 --rc genhtml_branch_coverage=1 00:23:13.533 --rc genhtml_function_coverage=1 00:23:13.533 --rc genhtml_legend=1 00:23:13.533 --rc geninfo_all_blocks=1 00:23:13.533 --rc geninfo_unexecuted_blocks=1 00:23:13.533 00:23:13.533 ' 00:23:13.533 15:09:37 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:13.533 15:09:37 -- nvmf/common.sh@7 -- # uname -s 00:23:13.793 15:09:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.793 15:09:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.793 15:09:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.793 15:09:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.793 15:09:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.793 15:09:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.793 15:09:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.793 15:09:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.793 15:09:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.793 15:09:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.793 15:09:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:23:13.793 15:09:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:23:13.793 15:09:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.793 15:09:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.793 15:09:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:13.793 15:09:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:13.793 15:09:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.793 15:09:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.793 15:09:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.793 15:09:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.793 15:09:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.793 15:09:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.793 15:09:37 -- paths/export.sh@5 -- # export PATH 00:23:13.793 15:09:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.793 15:09:37 -- nvmf/common.sh@46 -- # : 0 00:23:13.793 15:09:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:13.793 15:09:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:13.793 15:09:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:13.793 15:09:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.793 15:09:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.793 15:09:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:13.793 15:09:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:13.793 15:09:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:13.793 15:09:37 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:13.793 15:09:37 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:13.793 15:09:37 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:13.793 15:09:37 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:13.793 15:09:37 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:13.793 15:09:37 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:13.793 15:09:37 -- host/multipath.sh@30 -- # nvmftestinit 00:23:13.793 15:09:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:13.793 15:09:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.793 15:09:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:13.793 15:09:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:13.793 15:09:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:13.793 15:09:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.793 15:09:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.793 15:09:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.793 15:09:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:13.793 15:09:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:13.793 15:09:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:13.793 15:09:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:13.793 15:09:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:13.793 15:09:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:13.793 15:09:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.793 15:09:37 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.793 15:09:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:13.793 15:09:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:13.793 15:09:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:13.793 15:09:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:13.793 15:09:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:13.793 15:09:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.793 15:09:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:13.793 15:09:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:13.793 15:09:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:13.793 15:09:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:13.793 15:09:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:13.793 15:09:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:13.793 Cannot find device "nvmf_tgt_br" 00:23:13.793 15:09:37 -- nvmf/common.sh@154 -- # true 00:23:13.793 15:09:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:13.793 Cannot find device "nvmf_tgt_br2" 00:23:13.793 15:09:37 -- nvmf/common.sh@155 -- # true 00:23:13.793 15:09:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:13.793 15:09:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:13.793 Cannot find device "nvmf_tgt_br" 00:23:13.793 15:09:37 -- nvmf/common.sh@157 -- # true 00:23:13.793 15:09:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:13.793 Cannot find device "nvmf_tgt_br2" 00:23:13.793 15:09:37 -- nvmf/common.sh@158 -- # true 00:23:13.793 15:09:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:13.794 15:09:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:13.794 15:09:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:13.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:13.794 15:09:37 -- nvmf/common.sh@161 -- # true 00:23:13.794 15:09:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:13.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:13.794 15:09:37 -- nvmf/common.sh@162 -- # true 00:23:13.794 15:09:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:13.794 15:09:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:13.794 15:09:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:13.794 15:09:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:13.794 15:09:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:13.794 15:09:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:13.794 15:09:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:13.794 15:09:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:13.794 15:09:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:13.794 15:09:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:14.053 15:09:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:14.053 15:09:37 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:23:14.053 15:09:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:14.053 15:09:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:14.053 15:09:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:14.053 15:09:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:14.053 15:09:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:14.053 15:09:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:14.053 15:09:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:14.053 15:09:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:14.053 15:09:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:14.053 15:09:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:14.053 15:09:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:14.053 15:09:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:14.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:14.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:23:14.053 00:23:14.053 --- 10.0.0.2 ping statistics --- 00:23:14.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.053 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:23:14.053 15:09:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:14.053 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:14.053 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:23:14.053 00:23:14.053 --- 10.0.0.3 ping statistics --- 00:23:14.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.053 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:23:14.053 15:09:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:14.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:14.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:23:14.053 00:23:14.053 --- 10.0.0.1 ping statistics --- 00:23:14.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.053 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:23:14.053 15:09:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.053 15:09:37 -- nvmf/common.sh@421 -- # return 0 00:23:14.053 15:09:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:14.053 15:09:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.053 15:09:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:14.053 15:09:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:14.053 15:09:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.053 15:09:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:14.053 15:09:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:14.053 15:09:37 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:14.053 15:09:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:14.053 15:09:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:14.053 15:09:37 -- common/autotest_common.sh@10 -- # set +x 00:23:14.053 15:09:37 -- nvmf/common.sh@469 -- # nvmfpid=99115 00:23:14.053 15:09:37 -- nvmf/common.sh@470 -- # waitforlisten 99115 00:23:14.053 15:09:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:14.053 15:09:37 -- common/autotest_common.sh@829 -- # '[' -z 99115 ']' 00:23:14.053 15:09:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.053 15:09:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:14.053 15:09:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.053 15:09:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.053 15:09:37 -- common/autotest_common.sh@10 -- # set +x 00:23:14.053 [2024-11-18 15:09:37.562259] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:14.053 [2024-11-18 15:09:37.562377] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.312 [2024-11-18 15:09:37.695237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:14.312 [2024-11-18 15:09:37.755245] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:14.312 [2024-11-18 15:09:37.755426] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.312 [2024-11-18 15:09:37.755440] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.312 [2024-11-18 15:09:37.755448] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
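The multipath test does not use real NICs: the trace above tears down any previous bridge, builds a veth/netns topology with the initiator at 10.0.0.1 on nvmf_init_if and the target addresses 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, all switched through the nvmf_br bridge, verifies it with the three pings, loads nvme-tcp, and starts nvmf_tgt inside the namespace with core mask 0x3. Condensed from the ip/iptables commands in the trace (teardown half and error handling omitted), the topology is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set lo up; ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up'
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT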
00:23:14.312 [2024-11-18 15:09:37.755916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.312 [2024-11-18 15:09:37.755967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.249 15:09:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:15.249 15:09:38 -- common/autotest_common.sh@862 -- # return 0 00:23:15.249 15:09:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:15.249 15:09:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:15.249 15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:23:15.249 15:09:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.249 15:09:38 -- host/multipath.sh@33 -- # nvmfapp_pid=99115 00:23:15.249 15:09:38 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:15.249 [2024-11-18 15:09:38.776808] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.249 15:09:38 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:15.507 Malloc0 00:23:15.507 15:09:39 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:15.766 15:09:39 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:16.024 15:09:39 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:16.282 [2024-11-18 15:09:39.688734] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.282 15:09:39 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:16.541 [2024-11-18 15:09:39.884968] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:16.541 15:09:39 -- host/multipath.sh@44 -- # bdevperf_pid=99213 00:23:16.541 15:09:39 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:16.541 15:09:39 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:16.541 15:09:39 -- host/multipath.sh@47 -- # waitforlisten 99213 /var/tmp/bdevperf.sock 00:23:16.541 15:09:39 -- common/autotest_common.sh@829 -- # '[' -z 99213 ']' 00:23:16.541 15:09:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.541 15:09:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:16.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.541 15:09:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
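The target-side configuration in the trace above reduces to a short rpc.py sequence: create the TCP transport, create a 64 MiB Malloc bdev, create a subsystem with ANA reporting enabled and two listeners on 10.0.0.2:4420 and 10.0.0.2:4421, then launch bdevperf with its own RPC socket (/var/tmp/bdevperf.sock) for the host side. A condensed sketch, reusing the NQN, serial, and addresses from the log rather than the full test script:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  # -a: allow any host, -r: enable ANA reporting, -m 2: at most two namespaces
  $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -r -m 2
  $RPC nvmf_subsystem_add_ns $NQN Malloc0
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421

The host side then attaches both listeners as paths of a single Nvme0 controller (the second attach uses -x multipath) and flips their ANA states with nvmf_subsystem_listener_set_ana_state, using the bpftrace helper to confirm which port the IO actually lands on.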
00:23:16.541 15:09:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:16.541 15:09:39 -- common/autotest_common.sh@10 -- # set +x 00:23:17.477 15:09:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.477 15:09:40 -- common/autotest_common.sh@862 -- # return 0 00:23:17.477 15:09:40 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:17.737 15:09:41 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:17.996 Nvme0n1 00:23:17.996 15:09:41 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:18.255 Nvme0n1 00:23:18.255 15:09:41 -- host/multipath.sh@78 -- # sleep 1 00:23:18.255 15:09:41 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:19.190 15:09:42 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:19.190 15:09:42 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:19.449 15:09:43 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:19.708 15:09:43 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:19.708 15:09:43 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99115 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:19.708 15:09:43 -- host/multipath.sh@65 -- # dtrace_pid=99300 00:23:19.708 15:09:43 -- host/multipath.sh@66 -- # sleep 6 00:23:26.274 15:09:49 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:26.274 15:09:49 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:26.274 15:09:49 -- host/multipath.sh@67 -- # active_port=4421 00:23:26.274 15:09:49 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:26.274 Attaching 4 probes... 
00:23:26.274 @path[10.0.0.2, 4421]: 20918 00:23:26.274 @path[10.0.0.2, 4421]: 21590 00:23:26.274 @path[10.0.0.2, 4421]: 21623 00:23:26.274 @path[10.0.0.2, 4421]: 21508 00:23:26.274 @path[10.0.0.2, 4421]: 21710 00:23:26.274 15:09:49 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:26.274 15:09:49 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:26.274 15:09:49 -- host/multipath.sh@69 -- # sed -n 1p 00:23:26.274 15:09:49 -- host/multipath.sh@69 -- # port=4421 00:23:26.274 15:09:49 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:26.274 15:09:49 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:26.274 15:09:49 -- host/multipath.sh@72 -- # kill 99300 00:23:26.274 15:09:49 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:26.274 15:09:49 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:26.274 15:09:49 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:26.274 15:09:49 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:26.533 15:09:49 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:26.533 15:09:49 -- host/multipath.sh@65 -- # dtrace_pid=99432 00:23:26.533 15:09:49 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99115 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:26.533 15:09:49 -- host/multipath.sh@66 -- # sleep 6 00:23:33.098 15:09:55 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:33.098 15:09:55 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:33.098 15:09:56 -- host/multipath.sh@67 -- # active_port=4420 00:23:33.098 15:09:56 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:33.098 Attaching 4 probes... 
00:23:33.098 @path[10.0.0.2, 4420]: 21482 00:23:33.098 @path[10.0.0.2, 4420]: 21692 00:23:33.098 @path[10.0.0.2, 4420]: 21760 00:23:33.098 @path[10.0.0.2, 4420]: 21585 00:23:33.098 @path[10.0.0.2, 4420]: 21894 00:23:33.098 15:09:56 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:33.098 15:09:56 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:33.098 15:09:56 -- host/multipath.sh@69 -- # sed -n 1p 00:23:33.098 15:09:56 -- host/multipath.sh@69 -- # port=4420 00:23:33.098 15:09:56 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:33.098 15:09:56 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:33.098 15:09:56 -- host/multipath.sh@72 -- # kill 99432 00:23:33.098 15:09:56 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:33.098 15:09:56 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:33.098 15:09:56 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:33.098 15:09:56 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:33.357 15:09:56 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:33.357 15:09:56 -- host/multipath.sh@65 -- # dtrace_pid=99562 00:23:33.357 15:09:56 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99115 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:33.357 15:09:56 -- host/multipath.sh@66 -- # sleep 6 00:23:39.921 15:10:02 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:39.921 15:10:02 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:39.921 15:10:03 -- host/multipath.sh@67 -- # active_port=4421 00:23:39.921 15:10:03 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:39.921 Attaching 4 probes... 
00:23:39.921 @path[10.0.0.2, 4421]: 17154 00:23:39.921 @path[10.0.0.2, 4421]: 21192 00:23:39.921 @path[10.0.0.2, 4421]: 21086 00:23:39.921 @path[10.0.0.2, 4421]: 21168 00:23:39.921 @path[10.0.0.2, 4421]: 21092 00:23:39.921 15:10:03 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:39.921 15:10:03 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:39.921 15:10:03 -- host/multipath.sh@69 -- # sed -n 1p 00:23:39.921 15:10:03 -- host/multipath.sh@69 -- # port=4421 00:23:39.921 15:10:03 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:39.921 15:10:03 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:39.921 15:10:03 -- host/multipath.sh@72 -- # kill 99562 00:23:39.921 15:10:03 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:39.921 15:10:03 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:39.921 15:10:03 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:39.921 15:10:03 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:40.180 15:10:03 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:40.180 15:10:03 -- host/multipath.sh@65 -- # dtrace_pid=99697 00:23:40.180 15:10:03 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99115 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:40.180 15:10:03 -- host/multipath.sh@66 -- # sleep 6 00:23:46.744 15:10:09 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:46.744 15:10:09 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:23:46.744 15:10:09 -- host/multipath.sh@67 -- # active_port= 00:23:46.744 15:10:09 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:46.744 Attaching 4 probes... 
00:23:46.744 00:23:46.744 00:23:46.744 00:23:46.744 00:23:46.744 00:23:46.744 15:10:09 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:46.744 15:10:09 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:46.744 15:10:09 -- host/multipath.sh@69 -- # sed -n 1p 00:23:46.744 15:10:09 -- host/multipath.sh@69 -- # port= 00:23:46.744 15:10:09 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:23:46.744 15:10:09 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:23:46.744 15:10:09 -- host/multipath.sh@72 -- # kill 99697 00:23:46.744 15:10:09 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:46.744 15:10:09 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:23:46.744 15:10:09 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:46.744 15:10:10 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:46.744 15:10:10 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:23:47.003 15:10:10 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99115 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:47.003 15:10:10 -- host/multipath.sh@65 -- # dtrace_pid=99829 00:23:47.003 15:10:10 -- host/multipath.sh@66 -- # sleep 6 00:23:53.572 15:10:16 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:53.572 15:10:16 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:53.572 15:10:16 -- host/multipath.sh@67 -- # active_port=4421 00:23:53.572 15:10:16 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:53.572 Attaching 4 probes... 
00:23:53.572 @path[10.0.0.2, 4421]: 20459 00:23:53.572 @path[10.0.0.2, 4421]: 20854 00:23:53.572 @path[10.0.0.2, 4421]: 20807 00:23:53.572 @path[10.0.0.2, 4421]: 20741 00:23:53.572 @path[10.0.0.2, 4421]: 20830 00:23:53.572 15:10:16 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:53.572 15:10:16 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:53.572 15:10:16 -- host/multipath.sh@69 -- # sed -n 1p 00:23:53.572 15:10:16 -- host/multipath.sh@69 -- # port=4421 00:23:53.572 15:10:16 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:53.572 15:10:16 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:53.572 15:10:16 -- host/multipath.sh@72 -- # kill 99829 00:23:53.572 15:10:16 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:53.572 15:10:16 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:53.572 [2024-11-18 15:10:16.877383] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19370 is same with the state(5) to be set 00:23:53.572 [2024-11-18 15:10:16.877439] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19370 is same with the state(5) to be set 00:23:53.572 [2024-11-18 15:10:16.877450] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19370 is same with the state(5) to be set 00:23:53.572 [2024-11-18 15:10:16.877459] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19370 is same with the state(5) to be set 00:23:53.572 [2024-11-18 15:10:16.877467] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19370 is same with the state(5) to be set 00:23:53.572 [2024-11-18 15:10:16.877476] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19370 is same with the state(5) to be set 00:23:53.572 [2024-11-18 15:10:16.877484] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19370 is same with the state(5) to be set 00:23:53.572 [2024-11-18 15:10:16.877493] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19370 is same with the state(5) to be set 00:23:53.572 [2024-11-18 15:10:16.877501] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19370 is same with the state(5) to be set 00:23:53.572 [2024-11-18 15:10:16.877509] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19370 is same with the state(5) to be set 00:23:53.572 [2024-11-18 15:10:16.877517] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19370 is same with the state(5) to be set 00:23:53.572 [2024-11-18 15:10:16.877526] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19370 is same with the state(5) to be set 00:23:53.572 [2024-11-18 15:10:16.877534] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19370 is same with the state(5) to be set 00:23:53.572 [2024-11-18 15:10:16.877542] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19370 is same with the state(5) to be set 00:23:53.572 [2024-11-18 15:10:16.877564] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19370 is same with the state(5) to be set 00:23:53.572 [2024-11-18 15:10:16.877572] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1a19370 is same with the state(5) to be set 00:23:53.572 (the same tcp.c:1576 nvmf_tcp_qpair_set_recv_state *ERROR* entry repeats with consecutive timestamps from 2024-11-18 15:10:16.877580 through 15:10:16.877764) 00:23:53.572 [2024-11-18 15:10:16.877771]
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19370 is same with the state(5) to be set 00:23:53.572 [2024-11-18 15:10:16.877779] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19370 is same with the state(5) to be set 00:23:53.572 [2024-11-18 15:10:16.877786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19370 is same with the state(5) to be set 00:23:53.572 [2024-11-18 15:10:16.877794] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19370 is same with the state(5) to be set 00:23:53.572 [2024-11-18 15:10:16.877801] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a19370 is same with the state(5) to be set 00:23:53.572 15:10:16 -- host/multipath.sh@101 -- # sleep 1 00:23:54.507 15:10:17 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:23:54.507 15:10:17 -- host/multipath.sh@65 -- # dtrace_pid=99959 00:23:54.507 15:10:17 -- host/multipath.sh@66 -- # sleep 6 00:23:54.507 15:10:17 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99115 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:01.070 15:10:23 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:01.070 15:10:23 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:01.070 15:10:24 -- host/multipath.sh@67 -- # active_port=4420 00:24:01.070 15:10:24 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:01.070 Attaching 4 probes... 00:24:01.070 @path[10.0.0.2, 4420]: 20703 00:24:01.070 @path[10.0.0.2, 4420]: 20975 00:24:01.070 @path[10.0.0.2, 4420]: 21266 00:24:01.070 @path[10.0.0.2, 4420]: 20948 00:24:01.070 @path[10.0.0.2, 4420]: 21175 00:24:01.070 15:10:24 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:01.070 15:10:24 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:01.070 15:10:24 -- host/multipath.sh@69 -- # sed -n 1p 00:24:01.070 15:10:24 -- host/multipath.sh@69 -- # port=4420 00:24:01.070 15:10:24 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:01.070 15:10:24 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:01.070 15:10:24 -- host/multipath.sh@72 -- # kill 99959 00:24:01.070 15:10:24 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:01.070 15:10:24 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:01.070 [2024-11-18 15:10:24.380192] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:01.070 15:10:24 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:01.070 15:10:24 -- host/multipath.sh@111 -- # sleep 6 00:24:07.704 15:10:30 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:07.704 15:10:30 -- host/multipath.sh@65 -- # dtrace_pid=100157 00:24:07.704 15:10:30 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99115 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:07.705 15:10:30 -- host/multipath.sh@66 -- # sleep 6 00:24:14.311 15:10:36 -- host/multipath.sh@67 -- # jq -r '.[] | select 
(.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:14.311 15:10:36 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:14.311 15:10:36 -- host/multipath.sh@67 -- # active_port=4421 00:24:14.311 15:10:36 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:14.311 Attaching 4 probes... 00:24:14.311 @path[10.0.0.2, 4421]: 20434 00:24:14.311 @path[10.0.0.2, 4421]: 20694 00:24:14.311 @path[10.0.0.2, 4421]: 20849 00:24:14.311 @path[10.0.0.2, 4421]: 20793 00:24:14.311 @path[10.0.0.2, 4421]: 20899 00:24:14.311 15:10:36 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:14.311 15:10:36 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:14.311 15:10:36 -- host/multipath.sh@69 -- # sed -n 1p 00:24:14.311 15:10:36 -- host/multipath.sh@69 -- # port=4421 00:24:14.311 15:10:36 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:14.311 15:10:36 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:14.311 15:10:36 -- host/multipath.sh@72 -- # kill 100157 00:24:14.311 15:10:36 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:14.311 15:10:36 -- host/multipath.sh@114 -- # killprocess 99213 00:24:14.311 15:10:36 -- common/autotest_common.sh@936 -- # '[' -z 99213 ']' 00:24:14.311 15:10:36 -- common/autotest_common.sh@940 -- # kill -0 99213 00:24:14.311 15:10:36 -- common/autotest_common.sh@941 -- # uname 00:24:14.311 15:10:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:14.312 15:10:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99213 00:24:14.312 killing process with pid 99213 00:24:14.312 15:10:36 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:14.312 15:10:36 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:14.312 15:10:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99213' 00:24:14.312 15:10:36 -- common/autotest_common.sh@955 -- # kill 99213 00:24:14.312 15:10:36 -- common/autotest_common.sh@960 -- # wait 99213 00:24:14.312 Connection closed with partial response: 00:24:14.312 00:24:14.312 00:24:14.312 15:10:37 -- host/multipath.sh@116 -- # wait 99213 00:24:14.312 15:10:37 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:14.312 [2024-11-18 15:09:39.957842] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:14.312 [2024-11-18 15:09:39.957955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99213 ] 00:24:14.312 [2024-11-18 15:09:40.099653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.312 [2024-11-18 15:09:40.172442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.312 Running I/O for 90 seconds... 
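The pass/fail decisions logged above come from the test's confirm_io_on_port helper (multipath.sh lines 64-73). A minimal sketch of that flow, reconstructed from the bpftrace.sh, rpc.py, jq and awk/cut/sed invocations visible in the log, is shown below; the wrapper function, the variable names, and the redirection into trace.txt are assumptions made for readability, not the test's verbatim source.

#!/usr/bin/env bash
# Sketch of the confirm_io_on_port flow exercised in the log above.
set -euo pipefail

rootdir=/home/vagrant/spdk_repo/spdk
trace_file=$rootdir/test/nvmf/host/trace.txt
nqn=nqn.2016-06.io.spdk:cnode1
bdevperf_pid=99115   # pid handed to bpftrace.sh in the log; illustrative here

confirm_io_on_port() {
  local ana_state=$1 expected_port=$2 dtrace_pid active_port port

  # Attach the nvmf_path.bt probes to the I/O process and let them sample
  # for a few seconds (redirection into trace.txt is assumed; the log only
  # shows the trace being cat'ed afterwards).
  "$rootdir/scripts/bpftrace.sh" "$bdevperf_pid" \
    "$rootdir/scripts/bpf/nvmf_path.bt" &> "$trace_file" &
  dtrace_pid=$!
  sleep 6

  # Ask the target which listener currently advertises the requested ANA state.
  active_port=$("$rootdir/scripts/rpc.py" nvmf_subsystem_get_listeners "$nqn" \
    | jq -r ".[] | select(.ana_states[0].ana_state==\"$ana_state\") | .address.trsvcid")

  # Take the first "@path[10.0.0.2, <port>]: <count>" sample from the trace.
  port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace_file" | cut -d ']' -f1 | sed -n 1p)

  # I/O must be flowing on the port that matches the expected ANA state.
  [[ "$port" == "$expected_port" ]]
  [[ "$port" == "$active_port" ]]

  kill "$dtrace_pid"
  rm -f "$trace_file"
}

# Usage mirroring the steps logged above: flip the ANA states, then verify
# that traffic follows the optimized listener on port 4421.
"$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
"$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n optimized
confirm_io_on_port optimized 4421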
00:24:14.312 [2024-11-18 15:09:49.955763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.312 [2024-11-18 15:09:49.955832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:14.312 [2024-11-18 15:09:49.955876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.312 [2024-11-18 15:09:49.955894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:14.312 [2024-11-18 15:09:49.955914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.312 [2024-11-18 15:09:49.955927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:14.312 [2024-11-18 15:09:49.955944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.312 [2024-11-18 15:09:49.955957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:14.312 [2024-11-18 15:09:49.955974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.312 [2024-11-18 15:09:49.955985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:14.312 [2024-11-18 15:09:49.956003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.312 [2024-11-18 15:09:49.956015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:14.312 [2024-11-18 15:09:49.956033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.312 [2024-11-18 15:09:49.956045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:14.312 [2024-11-18 15:09:49.956063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.312 [2024-11-18 15:09:49.956076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:14.312 [2024-11-18 15:09:49.956092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.312 [2024-11-18 15:09:49.956104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:14.312 [2024-11-18 15:09:49.956121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.312 [2024-11-18 15:09:49.956134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:14.312 [2024-11-18 15:09:49.956152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.312 [2024-11-18 15:09:49.956190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:14.312 [2024-11-18 15:09:49.956212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.312 [2024-11-18 15:09:49.956225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:14.312 [2024-11-18 15:09:49.956242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.312 [2024-11-18 15:09:49.956254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:14.312 [2024-11-18 15:09:49.956271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.312 [2024-11-18 15:09:49.956282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:14.312 [2024-11-18 15:09:49.956299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.312 [2024-11-18 15:09:49.956310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.312 [2024-11-18 15:09:49.956376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.312 [2024-11-18 15:09:49.956390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.312 [2024-11-18 15:09:49.956409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.312 [2024-11-18 15:09:49.956422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.312 [2024-11-18 15:09:49.956440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.312 [2024-11-18 15:09:49.956453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:14.312 [2024-11-18 15:09:49.956471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.312 [2024-11-18 15:09:49.956485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:14.312 [2024-11-18 15:09:49.956503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.312 [2024-11-18 15:09:49.956516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:14.312 [2024-11-18 15:09:49.956534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.312 [2024-11-18 15:09:49.956547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:14.312 [2024-11-18 15:09:49.956565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.312 [2024-11-18 15:09:49.956578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:14.312 [2024-11-18 15:09:49.956596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.313 [2024-11-18 15:09:49.956610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.957224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.313 [2024-11-18 15:09:49.957250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.957275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.313 [2024-11-18 15:09:49.957291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.957312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.313 [2024-11-18 15:09:49.957325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.957362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.313 [2024-11-18 15:09:49.957377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.957395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.313 [2024-11-18 15:09:49.957409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.957428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.313 [2024-11-18 15:09:49.957442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.957461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.313 [2024-11-18 15:09:49.957474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.957493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.313 [2024-11-18 15:09:49.957507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.957528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.313 [2024-11-18 15:09:49.957559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.957577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.313 [2024-11-18 15:09:49.957591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.957609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.313 [2024-11-18 15:09:49.957622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.957656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.313 [2024-11-18 15:09:49.957668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.957701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.313 [2024-11-18 15:09:49.957716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.957734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.313 [2024-11-18 15:09:49.957747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.957765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.313 [2024-11-18 15:09:49.957778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.957798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.313 [2024-11-18 15:09:49.957811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.957829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 
lba:22448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.313 [2024-11-18 15:09:49.957841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.957859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.313 [2024-11-18 15:09:49.957872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.957890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.313 [2024-11-18 15:09:49.957902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.957920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.313 [2024-11-18 15:09:49.957932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.957950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.313 [2024-11-18 15:09:49.957962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.957980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.313 [2024-11-18 15:09:49.957992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.958010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.313 [2024-11-18 15:09:49.958023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.958040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.313 [2024-11-18 15:09:49.958054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.958073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.313 [2024-11-18 15:09:49.958092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.958112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.313 [2024-11-18 15:09:49.958125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.313 [2024-11-18 15:09:49.958143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.313 [2024-11-18 15:09:49.958156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.314 [2024-11-18 15:09:49.958186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.314 [2024-11-18 15:09:49.958217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.314 [2024-11-18 15:09:49.958247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.314 [2024-11-18 15:09:49.958278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.314 [2024-11-18 15:09:49.958309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.314 [2024-11-18 15:09:49.958371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.314 [2024-11-18 15:09:49.958405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.314 [2024-11-18 15:09:49.958437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.314 [2024-11-18 15:09:49.958469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 
dnr:0 00:24:14.314 [2024-11-18 15:09:49.958487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.314 [2024-11-18 15:09:49.958509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.314 [2024-11-18 15:09:49.958542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.314 [2024-11-18 15:09:49.958573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.314 [2024-11-18 15:09:49.958604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.314 [2024-11-18 15:09:49.958636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.314 [2024-11-18 15:09:49.958667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.314 [2024-11-18 15:09:49.958719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.314 [2024-11-18 15:09:49.958749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.314 [2024-11-18 15:09:49.958780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.314 [2024-11-18 15:09:49.958812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.314 [2024-11-18 15:09:49.958843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.314 [2024-11-18 15:09:49.958873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.314 [2024-11-18 15:09:49.958903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.314 [2024-11-18 15:09:49.958940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.958958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.314 [2024-11-18 15:09:49.958971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.959506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.314 [2024-11-18 15:09:49.959531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.959555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.314 [2024-11-18 15:09:49.959569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:14.314 [2024-11-18 15:09:49.959588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.315 [2024-11-18 15:09:49.959601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.959620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.315 [2024-11-18 15:09:49.959634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.959652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.315 [2024-11-18 15:09:49.959665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.959684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.315 [2024-11-18 15:09:49.959713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.959731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.315 [2024-11-18 15:09:49.959744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.959762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.315 [2024-11-18 15:09:49.959775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.959793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.315 [2024-11-18 15:09:49.959814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.959833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.315 [2024-11-18 15:09:49.959846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.959875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.315 [2024-11-18 15:09:49.959888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.959906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.315 [2024-11-18 15:09:49.959920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.959937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.315 [2024-11-18 15:09:49.959950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.959968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.315 [2024-11-18 15:09:49.959981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.960000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:14.315 [2024-11-18 15:09:49.960013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.960030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.315 [2024-11-18 15:09:49.960043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.960061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.315 [2024-11-18 15:09:49.960073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.960092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.315 [2024-11-18 15:09:49.960105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.960122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.315 [2024-11-18 15:09:49.960135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.960152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.315 [2024-11-18 15:09:49.960165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.960183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.315 [2024-11-18 15:09:49.960195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.960214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.315 [2024-11-18 15:09:49.960226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.960245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.315 [2024-11-18 15:09:49.960294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.960315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.315 [2024-11-18 15:09:49.960356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.960378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 
nsid:1 lba:23392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.315 [2024-11-18 15:09:49.960398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.960417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.315 [2024-11-18 15:09:49.960430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.960448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.315 [2024-11-18 15:09:49.960461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.960479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.315 [2024-11-18 15:09:49.960493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.960511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.315 [2024-11-18 15:09:49.960524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:14.315 [2024-11-18 15:09:49.960542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.315 [2024-11-18 15:09:49.960555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.960573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.316 [2024-11-18 15:09:49.960586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.960604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.316 [2024-11-18 15:09:49.960617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.960636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.316 [2024-11-18 15:09:49.960649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.960667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.316 [2024-11-18 15:09:49.960680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.960713] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.316 [2024-11-18 15:09:49.960734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.960752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.316 [2024-11-18 15:09:49.960765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.960783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.316 [2024-11-18 15:09:49.960796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.960815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.316 [2024-11-18 15:09:49.960828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.960846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.316 [2024-11-18 15:09:49.960859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.960876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.316 [2024-11-18 15:09:49.960889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.960906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.316 [2024-11-18 15:09:49.960925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.960944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.316 [2024-11-18 15:09:49.960957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.960975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.316 [2024-11-18 15:09:49.960987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.961005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.316 [2024-11-18 15:09:49.961018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:24:14.316 [2024-11-18 15:09:49.961035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.316 [2024-11-18 15:09:49.961048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.961065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.316 [2024-11-18 15:09:49.961078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.961120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.316 [2024-11-18 15:09:49.961151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.961178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.316 [2024-11-18 15:09:49.961192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.961211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.316 [2024-11-18 15:09:49.961225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.961244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.316 [2024-11-18 15:09:49.961257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.961276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.316 [2024-11-18 15:09:49.961289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.961308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.316 [2024-11-18 15:09:49.961321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.961339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.316 [2024-11-18 15:09:49.961365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:49.961393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.316 [2024-11-18 15:09:49.961408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:56.463982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:117304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.316 [2024-11-18 15:09:56.464054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:56.464110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.316 [2024-11-18 15:09:56.464128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:14.316 [2024-11-18 15:09:56.464148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:117320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.317 [2024-11-18 15:09:56.464161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.464179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:117328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.317 [2024-11-18 15:09:56.464191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.464208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.317 [2024-11-18 15:09:56.464220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.464256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:116696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.317 [2024-11-18 15:09:56.464270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.464288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:116712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.317 [2024-11-18 15:09:56.464300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.464348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:116736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.317 [2024-11-18 15:09:56.464365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.464510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:116760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.317 [2024-11-18 15:09:56.464534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.464558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:116768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.317 [2024-11-18 15:09:56.464572] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.464591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.317 [2024-11-18 15:09:56.464604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.464623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:116792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.317 [2024-11-18 15:09:56.464653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.464670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:116800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.317 [2024-11-18 15:09:56.464683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.464716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:117344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.317 [2024-11-18 15:09:56.464728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.464745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.317 [2024-11-18 15:09:56.464757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.464775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:117360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.317 [2024-11-18 15:09:56.464787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.464805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.317 [2024-11-18 15:09:56.464818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.464852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:117376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.317 [2024-11-18 15:09:56.464868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.465155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:117384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.317 [2024-11-18 15:09:56.465175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.465196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:14.317 [2024-11-18 15:09:56.465211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.465231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.317 [2024-11-18 15:09:56.465244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.465264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:117408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.317 [2024-11-18 15:09:56.465279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.465299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:117416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.317 [2024-11-18 15:09:56.465313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.465333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:117424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.317 [2024-11-18 15:09:56.465358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.465382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.317 [2024-11-18 15:09:56.465396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.465416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.317 [2024-11-18 15:09:56.465431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.465452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:117448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.317 [2024-11-18 15:09:56.465466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.465487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.317 [2024-11-18 15:09:56.465501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.465521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:117464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.317 [2024-11-18 15:09:56.465535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.465688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:96 nsid:1 lba:117472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.317 [2024-11-18 15:09:56.465719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:14.317 [2024-11-18 15:09:56.465744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.317 [2024-11-18 15:09:56.465759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.465785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.318 [2024-11-18 15:09:56.465797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.465818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.318 [2024-11-18 15:09:56.465830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.465851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.318 [2024-11-18 15:09:56.465864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.465884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:116808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.318 [2024-11-18 15:09:56.465896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.465915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:116816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.318 [2024-11-18 15:09:56.465928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.465947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:116832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.318 [2024-11-18 15:09:56.465959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.465979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:116856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.318 [2024-11-18 15:09:56.465992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.466012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:116872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.318 [2024-11-18 15:09:56.466025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 
15:09:56.466044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:116896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.318 [2024-11-18 15:09:56.466056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.466076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:116920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.318 [2024-11-18 15:09:56.466088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.466108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:116944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.318 [2024-11-18 15:09:56.466128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.466148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:116952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.318 [2024-11-18 15:09:56.466161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.466180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:116960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.318 [2024-11-18 15:09:56.466192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.466212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:116976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.318 [2024-11-18 15:09:56.466224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.466244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:116984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.318 [2024-11-18 15:09:56.466256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.466275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.318 [2024-11-18 15:09:56.466288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.466308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:117048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.318 [2024-11-18 15:09:56.466320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.466359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:117088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.318 [2024-11-18 15:09:56.466372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.466409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:117112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.318 [2024-11-18 15:09:56.466425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.466446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.318 [2024-11-18 15:09:56.466460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.466482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:117520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.318 [2024-11-18 15:09:56.466497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.466518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:117528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.318 [2024-11-18 15:09:56.466533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.466554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.318 [2024-11-18 15:09:56.466568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:14.318 [2024-11-18 15:09:56.466602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.319 [2024-11-18 15:09:56.466616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.466639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.319 [2024-11-18 15:09:56.466668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.466704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.319 [2024-11-18 15:09:56.466716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.466747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.319 [2024-11-18 15:09:56.466759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.466778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.319 [2024-11-18 15:09:56.466791] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.466810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:117584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.319 [2024-11-18 15:09:56.466823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.466842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.319 [2024-11-18 15:09:56.466854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.466874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.319 [2024-11-18 15:09:56.466887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.466907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.319 [2024-11-18 15:09:56.466920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.466940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:117616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.319 [2024-11-18 15:09:56.466953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.466973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:117624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.319 [2024-11-18 15:09:56.466985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.467005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:117632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.319 [2024-11-18 15:09:56.467018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.467044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:117640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.319 [2024-11-18 15:09:56.467060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.467167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.319 [2024-11-18 15:09:56.467187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.467212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:117656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:14.319 [2024-11-18 15:09:56.467226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.467248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:117664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.319 [2024-11-18 15:09:56.467261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.467283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.319 [2024-11-18 15:09:56.467295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.467317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.319 [2024-11-18 15:09:56.467361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.467397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.319 [2024-11-18 15:09:56.467413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.467437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:117696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.319 [2024-11-18 15:09:56.467450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.467475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:117704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.319 [2024-11-18 15:09:56.467489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.467514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:117712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.319 [2024-11-18 15:09:56.467528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.467552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.319 [2024-11-18 15:09:56.467566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.467589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.319 [2024-11-18 15:09:56.467603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.467635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:47 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.319 [2024-11-18 15:09:56.467651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.467713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:117744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.319 [2024-11-18 15:09:56.467734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.467763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.319 [2024-11-18 15:09:56.467777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.467799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.319 [2024-11-18 15:09:56.467814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:14.319 [2024-11-18 15:09:56.467836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.467849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.467871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.467883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.467905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.320 [2024-11-18 15:09:56.467917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.467939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.467952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.467974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.320 [2024-11-18 15:09:56.467986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:117808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.468021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468043] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:117816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.468055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.468090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.468130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.468166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:117144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.468200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:117152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.468234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.468269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.468309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:117176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.468386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:117192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.468426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 
cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.468462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.468500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:117224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.468536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:117232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.468572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:117240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.468615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:117264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.468654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:117288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.468745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:117824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.468780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.320 [2024-11-18 15:09:56.468813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.320 [2024-11-18 15:09:56.468848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.320 [2024-11-18 15:09:56.468882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:117856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.468928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:117864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.468963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.468991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.320 [2024-11-18 15:09:56.469004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:14.320 [2024-11-18 15:09:56.469026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.320 [2024-11-18 15:09:56.469038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:09:56.469060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.321 [2024-11-18 15:09:56.469080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.555203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.321 [2024-11-18 15:10:03.555259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.555399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.555428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.555455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.555474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.555498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14800 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.555516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.555540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.555557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.555581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.555598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.555622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.555640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.555663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.555680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.555746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.555762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.555784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.555799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.555821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.555837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.555858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.555874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.555895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.555911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.555960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:65 nsid:1 lba:14928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.555979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.556001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.556018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.556039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.556055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.556077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.556093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.556116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.556133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.556155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.321 [2024-11-18 15:10:03.556171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.556192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.556209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.556230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.556247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.556268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.321 [2024-11-18 15:10:03.556285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.556307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.556339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.556394] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.556417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.556441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.321 [2024-11-18 15:10:03.556459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.556494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.556514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:14.321 [2024-11-18 15:10:03.556538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.321 [2024-11-18 15:10:03.556555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.556578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.322 [2024-11-18 15:10:03.556596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.556619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.322 [2024-11-18 15:10:03.556637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.556660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.322 [2024-11-18 15:10:03.556677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.556715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.322 [2024-11-18 15:10:03.556749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.556772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.322 [2024-11-18 15:10:03.556788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.556810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.322 [2024-11-18 15:10:03.556826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f 
p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.556848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.322 [2024-11-18 15:10:03.556865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.557007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.322 [2024-11-18 15:10:03.557034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.557064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.322 [2024-11-18 15:10:03.557083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.557134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.322 [2024-11-18 15:10:03.557164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.557191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.322 [2024-11-18 15:10:03.557225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.557254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.322 [2024-11-18 15:10:03.557274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.557300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.322 [2024-11-18 15:10:03.557318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.557369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.322 [2024-11-18 15:10:03.557390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.557416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.322 [2024-11-18 15:10:03.557434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.557461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.322 [2024-11-18 15:10:03.557479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.557752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.322 [2024-11-18 15:10:03.557776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.557803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.322 [2024-11-18 15:10:03.557821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.557847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.322 [2024-11-18 15:10:03.557881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.557907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.322 [2024-11-18 15:10:03.557924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.557950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.322 [2024-11-18 15:10:03.557967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.557992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.322 [2024-11-18 15:10:03.558009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.558034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.322 [2024-11-18 15:10:03.558062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.558091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.322 [2024-11-18 15:10:03.558109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.558134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.322 [2024-11-18 15:10:03.558151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.558176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.322 [2024-11-18 15:10:03.558193] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.558218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.322 [2024-11-18 15:10:03.558235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.558260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.322 [2024-11-18 15:10:03.558277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.558302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.322 [2024-11-18 15:10:03.558319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.558394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.322 [2024-11-18 15:10:03.558416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:14.322 [2024-11-18 15:10:03.558444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.323 [2024-11-18 15:10:03.558462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.558489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.558507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.558533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.558551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.558577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.558595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.558622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.558639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.558693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15200 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.558728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.558754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.558772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.558796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.558813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.558838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.558855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.558881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.558897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.558922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.558939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.558963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.558980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.559004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.559021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.559046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.559062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.559087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.559105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.559130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:74 nsid:1 lba:15336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.559146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.559171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.559188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.559222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.559240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.559265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.323 [2024-11-18 15:10:03.559282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.559307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.559339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.559378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.323 [2024-11-18 15:10:03.559400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.559426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.559444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.559470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.559487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.559513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.559530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.559557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.559575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.559600] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.559617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.559643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.323 [2024-11-18 15:10:03.559660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:14.323 [2024-11-18 15:10:03.559702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.323 [2024-11-18 15:10:03.559719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.559744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.324 [2024-11-18 15:10:03.559761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.559786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.324 [2024-11-18 15:10:03.559812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.559839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.324 [2024-11-18 15:10:03.559857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.559882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.324 [2024-11-18 15:10:03.559899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.559924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.324 [2024-11-18 15:10:03.559941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.559967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.324 [2024-11-18 15:10:03.559984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.560009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.324 [2024-11-18 15:10:03.560026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 
dnr:0 00:24:14.324 [2024-11-18 15:10:03.560052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.324 [2024-11-18 15:10:03.560069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.560094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.324 [2024-11-18 15:10:03.560111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.560136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.324 [2024-11-18 15:10:03.560153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.560178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.324 [2024-11-18 15:10:03.560195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.560220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.324 [2024-11-18 15:10:03.560238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.560263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.324 [2024-11-18 15:10:03.560281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.560573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.324 [2024-11-18 15:10:03.560614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.560653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.324 [2024-11-18 15:10:03.560701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.560746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.324 [2024-11-18 15:10:03.560765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.560794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.324 [2024-11-18 15:10:03.560811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.560842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.324 [2024-11-18 15:10:03.560859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.560888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.324 [2024-11-18 15:10:03.560905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.560935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.324 [2024-11-18 15:10:03.560952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.560982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.324 [2024-11-18 15:10:03.560998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.561028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.324 [2024-11-18 15:10:03.561045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.561075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.324 [2024-11-18 15:10:03.561092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.561160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.324 [2024-11-18 15:10:03.561182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:14.324 [2024-11-18 15:10:03.561213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:03.561232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:03.561263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:03.561281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:03.561330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:03.561366] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:03.561400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:03.561424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:03.561469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:03.561488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:03.561518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:03.561543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:03.561573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:03.561591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:03.561621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:03.561652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:03.561682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:03.561700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:16.878114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:16.878159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:16.878186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:57840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:16.878203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:16.878220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:57848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:16.878234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:16.878250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 
15:10:16.878264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:16.878280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:16.878294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:16.878341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:57112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:16.878393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:16.878416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:16.878432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:16.878448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:57136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:16.878463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:16.878480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:57144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:16.878494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:16.878511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:57152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:16.878526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:16.878542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:57176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:16.878557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:16.878573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:57184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:16.878587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:16.878603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:57216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:16.878618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:16.878634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:57224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:16.878649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:16.878666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:57248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:16.878683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:16.878700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:57256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:16.878715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:16.878764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:57264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:16.878779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:16.878795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:16.878808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:16.878834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:57344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:16.878850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.325 [2024-11-18 15:10:16.878865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:57872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-18 15:10:16.878878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.878894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:57904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.326 [2024-11-18 15:10:16.878908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.878923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:57912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.326 [2024-11-18 15:10:16.878938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.878953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:57920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.326 [2024-11-18 15:10:16.878967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.878982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:57928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.326 [2024-11-18 15:10:16.878996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:57936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.326 [2024-11-18 15:10:16.879025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:57944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.326 [2024-11-18 15:10:16.879054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:57952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.326 [2024-11-18 15:10:16.879083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:57960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.326 [2024-11-18 15:10:16.879112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:57968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.326 [2024-11-18 15:10:16.879140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:57976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.326 [2024-11-18 15:10:16.879169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:57984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.326 [2024-11-18 15:10:16.879205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:57992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.326 [2024-11-18 15:10:16.879236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:58000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.326 [2024-11-18 15:10:16.879266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.326 [2024-11-18 15:10:16.879295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:58016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.326 [2024-11-18 15:10:16.879324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:58024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.326 [2024-11-18 15:10:16.879413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:58032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.326 [2024-11-18 15:10:16.879446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:58040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.326 [2024-11-18 15:10:16.879477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.326 [2024-11-18 15:10:16.879509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.326 [2024-11-18 15:10:16.879542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.326 [2024-11-18 15:10:16.879573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.326 [2024-11-18 15:10:16.879604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.326 [2024-11-18 15:10:16.879634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.326 [2024-11-18 15:10:16.879675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 
[2024-11-18 15:10:16.879700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:58096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.326 [2024-11-18 15:10:16.879715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.326 [2024-11-18 15:10:16.879745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.326 [2024-11-18 15:10:16.879776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:58120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.326 [2024-11-18 15:10:16.879821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.326 [2024-11-18 15:10:16.879852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:58136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.326 [2024-11-18 15:10:16.879883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.326 [2024-11-18 15:10:16.879900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.327 [2024-11-18 15:10:16.879914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.879930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:58152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.879944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.879961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:57392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.880002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:57416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.880031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:57424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.880060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:57440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.880096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:57448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.880126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.880155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:57504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.880184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:57512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.880213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:58160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.880242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.327 [2024-11-18 15:10:16.880271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:58176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.880299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:58184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.880335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:6 nsid:1 lba:58192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.327 [2024-11-18 15:10:16.880378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:58200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.880410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:58208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.327 [2024-11-18 15:10:16.880439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:58216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.327 [2024-11-18 15:10:16.880469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.880507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:57536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.880536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:57544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.880566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:57552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.880595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:57560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.880624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:57568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.880653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57584 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.880682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:57592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.880711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:57600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.880740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.327 [2024-11-18 15:10:16.880755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:57616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.327 [2024-11-18 15:10:16.880768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.880784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.328 [2024-11-18 15:10:16.880797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.880812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.328 [2024-11-18 15:10:16.880826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.880841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.328 [2024-11-18 15:10:16.880855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.880878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.328 [2024-11-18 15:10:16.880894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.880910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:57672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.328 [2024-11-18 15:10:16.880925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.880940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:57680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.328 [2024-11-18 15:10:16.880954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.880969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:14.328 [2024-11-18 15:10:16.880983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.880998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.328 [2024-11-18 15:10:16.881011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.881027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:58240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.328 [2024-11-18 15:10:16.881040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.881055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.328 [2024-11-18 15:10:16.881069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.881084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.328 [2024-11-18 15:10:16.881098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.881113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.328 [2024-11-18 15:10:16.881140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.881167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:58272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.328 [2024-11-18 15:10:16.881181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.881196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:58280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.328 [2024-11-18 15:10:16.881210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.881225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.328 [2024-11-18 15:10:16.881246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.881263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:58296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.328 [2024-11-18 15:10:16.881285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.881302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.328 [2024-11-18 15:10:16.881328] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.881347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.328 [2024-11-18 15:10:16.881361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.881377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.328 [2024-11-18 15:10:16.881391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.881407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:58328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.328 [2024-11-18 15:10:16.881433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.881449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:58336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.328 [2024-11-18 15:10:16.881462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.881478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:58344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.328 [2024-11-18 15:10:16.881493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.881508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.328 [2024-11-18 15:10:16.881543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.881569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.328 [2024-11-18 15:10:16.881583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.881599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:58368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.328 [2024-11-18 15:10:16.881613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.881628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:58376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.328 [2024-11-18 15:10:16.881642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.881658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.328 [2024-11-18 15:10:16.881672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.881687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:57688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.328 [2024-11-18 15:10:16.881701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.328 [2024-11-18 15:10:16.881729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:57696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.328 [2024-11-18 15:10:16.881760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.329 [2024-11-18 15:10:16.881777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:57712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.329 [2024-11-18 15:10:16.881792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.329 [2024-11-18 15:10:16.881808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:57736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.329 [2024-11-18 15:10:16.881828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.329 [2024-11-18 15:10:16.881845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.329 [2024-11-18 15:10:16.881860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.329 [2024-11-18 15:10:16.881876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:57768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.329 [2024-11-18 15:10:16.881890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.329 [2024-11-18 15:10:16.881906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:57784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.329 [2024-11-18 15:10:16.881924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.329 [2024-11-18 15:10:16.881941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:57800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.329 [2024-11-18 15:10:16.881970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.329 [2024-11-18 15:10:16.881995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.329 [2024-11-18 15:10:16.882009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.329 [2024-11-18 15:10:16.882026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:58400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.329 [2024-11-18 15:10:16.882041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.329 [2024-11-18 15:10:16.882060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.329 [2024-11-18 15:10:16.882074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.329 [2024-11-18 15:10:16.882090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.329 [2024-11-18 15:10:16.882104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.329 [2024-11-18 15:10:16.882119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:58424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.329 [2024-11-18 15:10:16.882133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.329 [2024-11-18 15:10:16.882149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:58432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.329 [2024-11-18 15:10:16.882170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.329 [2024-11-18 15:10:16.882187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.329 [2024-11-18 15:10:16.882202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.329 [2024-11-18 15:10:16.882218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:57808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.329 [2024-11-18 15:10:16.882232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.329 [2024-11-18 15:10:16.882248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.329 [2024-11-18 15:10:16.882262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.329 [2024-11-18 15:10:16.882277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.329 [2024-11-18 15:10:16.882291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.329 [2024-11-18 15:10:16.882307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:57856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.329 [2024-11-18 15:10:16.882321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.329 [2024-11-18 15:10:16.882336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:57864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.329 [2024-11-18 15:10:16.882355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.329 [2024-11-18 15:10:16.882385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:57880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.329 [2024-11-18 15:10:16.882402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.329 [2024-11-18 15:10:16.882418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:57888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.329 [2024-11-18 15:10:16.882432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.329 [2024-11-18 15:10:16.882447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4c060 is same with the state(5) to be set 00:24:14.329 [2024-11-18 15:10:16.882466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:14.329 [2024-11-18 15:10:16.882477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:14.329 [2024-11-18 15:10:16.882489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57896 len:8 PRP1 0x0 PRP2 0x0 00:24:14.329 [2024-11-18 15:10:16.882503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.329 [2024-11-18 15:10:16.882561] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe4c060 was disconnected and freed. reset controller. 00:24:14.329 [2024-11-18 15:10:16.883735] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:14.329 [2024-11-18 15:10:16.883832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5da00 (9): Bad file descriptor 00:24:14.329 [2024-11-18 15:10:16.883966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.329 [2024-11-18 15:10:16.884029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:14.329 [2024-11-18 15:10:16.884067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5da00 with addr=10.0.0.2, port=4421 00:24:14.329 [2024-11-18 15:10:16.884087] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5da00 is same with the state(5) to be set 00:24:14.329 [2024-11-18 15:10:16.884115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5da00 (9): Bad file descriptor 00:24:14.329 [2024-11-18 15:10:16.884142] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:14.329 [2024-11-18 15:10:16.884158] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:14.329 [2024-11-18 15:10:16.884174] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:14.329 [2024-11-18 15:10:16.884200] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:14.329 [2024-11-18 15:10:16.884217] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:14.329 [2024-11-18 15:10:26.931112] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:14.329 Received shutdown signal, test time was about 55.082901 seconds 00:24:14.329 00:24:14.329 Latency(us) 00:24:14.329 [2024-11-18T15:10:37.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.329 [2024-11-18T15:10:37.919Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:14.329 Verification LBA range: start 0x0 length 0x4000 00:24:14.329 Nvme0n1 : 55.08 12033.73 47.01 0.00 0.00 10621.15 364.92 7046430.72 00:24:14.330 [2024-11-18T15:10:37.920Z] =================================================================================================================== 00:24:14.330 [2024-11-18T15:10:37.920Z] Total : 12033.73 47.01 0.00 0.00 10621.15 364.92 7046430.72 00:24:14.330 15:10:37 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:14.330 15:10:37 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:24:14.330 15:10:37 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:14.330 15:10:37 -- host/multipath.sh@125 -- # nvmftestfini 00:24:14.330 15:10:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:14.330 15:10:37 -- nvmf/common.sh@116 -- # sync 00:24:14.330 15:10:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:14.330 15:10:37 -- nvmf/common.sh@119 -- # set +e 00:24:14.330 15:10:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:14.330 15:10:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:14.330 rmmod nvme_tcp 00:24:14.330 rmmod nvme_fabrics 00:24:14.330 rmmod nvme_keyring 00:24:14.330 15:10:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:14.330 15:10:37 -- nvmf/common.sh@123 -- # set -e 00:24:14.330 15:10:37 -- nvmf/common.sh@124 -- # return 0 00:24:14.330 15:10:37 -- nvmf/common.sh@477 -- # '[' -n 99115 ']' 00:24:14.330 15:10:37 -- nvmf/common.sh@478 -- # killprocess 99115 00:24:14.330 15:10:37 -- common/autotest_common.sh@936 -- # '[' -z 99115 ']' 00:24:14.330 15:10:37 -- common/autotest_common.sh@940 -- # kill -0 99115 00:24:14.330 15:10:37 -- common/autotest_common.sh@941 -- # uname 00:24:14.330 15:10:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:14.330 15:10:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99115 00:24:14.330 15:10:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:14.330 15:10:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:14.330 15:10:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99115' 00:24:14.330 killing process with pid 99115 00:24:14.330 15:10:37 -- common/autotest_common.sh@955 -- # kill 99115 00:24:14.330 15:10:37 -- common/autotest_common.sh@960 -- # wait 99115 00:24:14.330 15:10:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:14.330 15:10:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:14.330 15:10:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:14.330 15:10:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:14.330 15:10:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:14.330 15:10:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.330 15:10:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.330 15:10:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.330 15:10:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:14.330 00:24:14.330 real 1m0.776s 00:24:14.330 user 2m49.922s 00:24:14.330 
sys 0m14.505s 00:24:14.330 15:10:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:14.330 15:10:37 -- common/autotest_common.sh@10 -- # set +x 00:24:14.330 ************************************ 00:24:14.330 END TEST nvmf_multipath 00:24:14.330 ************************************ 00:24:14.330 15:10:37 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:14.330 15:10:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:14.330 15:10:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:14.330 15:10:37 -- common/autotest_common.sh@10 -- # set +x 00:24:14.330 ************************************ 00:24:14.330 START TEST nvmf_timeout 00:24:14.330 ************************************ 00:24:14.330 15:10:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:14.330 * Looking for test storage... 00:24:14.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:14.330 15:10:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:14.330 15:10:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:14.330 15:10:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:14.590 15:10:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:14.590 15:10:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:14.590 15:10:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:14.590 15:10:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:14.590 15:10:37 -- scripts/common.sh@335 -- # IFS=.-: 00:24:14.590 15:10:37 -- scripts/common.sh@335 -- # read -ra ver1 00:24:14.590 15:10:37 -- scripts/common.sh@336 -- # IFS=.-: 00:24:14.590 15:10:37 -- scripts/common.sh@336 -- # read -ra ver2 00:24:14.590 15:10:37 -- scripts/common.sh@337 -- # local 'op=<' 00:24:14.590 15:10:37 -- scripts/common.sh@339 -- # ver1_l=2 00:24:14.590 15:10:37 -- scripts/common.sh@340 -- # ver2_l=1 00:24:14.590 15:10:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:14.590 15:10:37 -- scripts/common.sh@343 -- # case "$op" in 00:24:14.590 15:10:37 -- scripts/common.sh@344 -- # : 1 00:24:14.590 15:10:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:14.590 15:10:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:14.590 15:10:37 -- scripts/common.sh@364 -- # decimal 1 00:24:14.590 15:10:37 -- scripts/common.sh@352 -- # local d=1 00:24:14.590 15:10:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:14.590 15:10:37 -- scripts/common.sh@354 -- # echo 1 00:24:14.590 15:10:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:14.590 15:10:37 -- scripts/common.sh@365 -- # decimal 2 00:24:14.590 15:10:37 -- scripts/common.sh@352 -- # local d=2 00:24:14.590 15:10:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:14.590 15:10:37 -- scripts/common.sh@354 -- # echo 2 00:24:14.590 15:10:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:14.590 15:10:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:14.590 15:10:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:14.590 15:10:37 -- scripts/common.sh@367 -- # return 0 00:24:14.590 15:10:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:14.590 15:10:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:14.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.590 --rc genhtml_branch_coverage=1 00:24:14.590 --rc genhtml_function_coverage=1 00:24:14.590 --rc genhtml_legend=1 00:24:14.590 --rc geninfo_all_blocks=1 00:24:14.590 --rc geninfo_unexecuted_blocks=1 00:24:14.590 00:24:14.590 ' 00:24:14.590 15:10:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:14.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.590 --rc genhtml_branch_coverage=1 00:24:14.590 --rc genhtml_function_coverage=1 00:24:14.590 --rc genhtml_legend=1 00:24:14.590 --rc geninfo_all_blocks=1 00:24:14.590 --rc geninfo_unexecuted_blocks=1 00:24:14.590 00:24:14.590 ' 00:24:14.590 15:10:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:14.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.590 --rc genhtml_branch_coverage=1 00:24:14.590 --rc genhtml_function_coverage=1 00:24:14.590 --rc genhtml_legend=1 00:24:14.590 --rc geninfo_all_blocks=1 00:24:14.590 --rc geninfo_unexecuted_blocks=1 00:24:14.590 00:24:14.590 ' 00:24:14.590 15:10:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:14.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.590 --rc genhtml_branch_coverage=1 00:24:14.590 --rc genhtml_function_coverage=1 00:24:14.590 --rc genhtml_legend=1 00:24:14.590 --rc geninfo_all_blocks=1 00:24:14.590 --rc geninfo_unexecuted_blocks=1 00:24:14.590 00:24:14.590 ' 00:24:14.590 15:10:37 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:14.590 15:10:37 -- nvmf/common.sh@7 -- # uname -s 00:24:14.590 15:10:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.590 15:10:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.590 15:10:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.590 15:10:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.590 15:10:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.590 15:10:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.590 15:10:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.590 15:10:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.590 15:10:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.590 15:10:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.590 15:10:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:24:14.590 
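The lcov version gate traced above reduces to a field-by-field numeric compare: both version strings are split on ".-:" and the first differing field decides. A condensed, standalone bash sketch of that logic (a re-implementation for illustration only, not the scripts/common.sh source):
# lt VER1 VER2 -> exit 0 when VER1 sorts strictly before VER2
lt() {
  local -a ver1 ver2
  local v d1 d2
  IFS=.-: read -ra ver1 <<< "$1"       # split on dots, dashes, colons as in the trace
  IFS=.-: read -ra ver2 <<< "$2"
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    d1=${ver1[v]:-0}                   # missing fields compare as 0
    d2=${ver2[v]:-0}
    (( d1 < d2 )) && return 0          # first differing field decides
    (( d1 > d2 )) && return 1
  done
  return 1                             # equal versions are not strictly less
}
lt 1.15 2 && echo "1.15 sorts before 2"   # same outcome as the traced check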
15:10:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:24:14.590 15:10:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.590 15:10:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.590 15:10:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:14.590 15:10:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:14.590 15:10:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.590 15:10:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.590 15:10:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.590 15:10:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.590 15:10:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.591 15:10:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.591 15:10:37 -- paths/export.sh@5 -- # export PATH 00:24:14.591 15:10:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.591 15:10:37 -- nvmf/common.sh@46 -- # : 0 00:24:14.591 15:10:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:14.591 15:10:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:14.591 15:10:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:14.591 15:10:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.591 15:10:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.591 15:10:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:24:14.591 15:10:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:14.591 15:10:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:14.591 15:10:37 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:14.591 15:10:37 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:14.591 15:10:37 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:14.591 15:10:37 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:14.591 15:10:37 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:14.591 15:10:37 -- host/timeout.sh@19 -- # nvmftestinit 00:24:14.591 15:10:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:14.591 15:10:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.591 15:10:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:14.591 15:10:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:14.591 15:10:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:14.591 15:10:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.591 15:10:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.591 15:10:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.591 15:10:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:14.591 15:10:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:14.591 15:10:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:14.591 15:10:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:14.591 15:10:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:14.591 15:10:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:14.591 15:10:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.591 15:10:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.591 15:10:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:14.591 15:10:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:14.591 15:10:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:14.591 15:10:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:14.591 15:10:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:14.591 15:10:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.591 15:10:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:14.591 15:10:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:14.591 15:10:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:14.591 15:10:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:14.591 15:10:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:14.591 15:10:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:14.591 Cannot find device "nvmf_tgt_br" 00:24:14.591 15:10:38 -- nvmf/common.sh@154 -- # true 00:24:14.591 15:10:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:14.591 Cannot find device "nvmf_tgt_br2" 00:24:14.591 15:10:38 -- nvmf/common.sh@155 -- # true 00:24:14.591 15:10:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:14.591 15:10:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:14.591 Cannot find device "nvmf_tgt_br" 00:24:14.591 15:10:38 -- nvmf/common.sh@157 -- # true 00:24:14.591 15:10:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:14.591 Cannot find device "nvmf_tgt_br2" 00:24:14.591 15:10:38 -- nvmf/common.sh@158 -- # true 00:24:14.591 15:10:38 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:14.591 15:10:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:14.591 15:10:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:14.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:14.591 15:10:38 -- nvmf/common.sh@161 -- # true 00:24:14.591 15:10:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:14.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:14.591 15:10:38 -- nvmf/common.sh@162 -- # true 00:24:14.591 15:10:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:14.591 15:10:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:14.591 15:10:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:14.591 15:10:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:14.591 15:10:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:14.591 15:10:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:14.850 15:10:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:14.850 15:10:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:14.850 15:10:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:14.850 15:10:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:14.850 15:10:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:14.850 15:10:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:14.850 15:10:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:14.850 15:10:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:14.850 15:10:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:14.850 15:10:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:14.850 15:10:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:14.850 15:10:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:14.850 15:10:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:14.850 15:10:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:14.850 15:10:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:14.850 15:10:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:14.850 15:10:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:14.850 15:10:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:14.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:14.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:24:14.850 00:24:14.850 --- 10.0.0.2 ping statistics --- 00:24:14.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.850 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:24:14.850 15:10:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:14.850 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:14.850 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:24:14.851 00:24:14.851 --- 10.0.0.3 ping statistics --- 00:24:14.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.851 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:24:14.851 15:10:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:14.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:14.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:24:14.851 00:24:14.851 --- 10.0.0.1 ping statistics --- 00:24:14.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.851 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:24:14.851 15:10:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.851 15:10:38 -- nvmf/common.sh@421 -- # return 0 00:24:14.851 15:10:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:14.851 15:10:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.851 15:10:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:14.851 15:10:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:14.851 15:10:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.851 15:10:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:14.851 15:10:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:14.851 15:10:38 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:14.851 15:10:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:14.851 15:10:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:14.851 15:10:38 -- common/autotest_common.sh@10 -- # set +x 00:24:14.851 15:10:38 -- nvmf/common.sh@469 -- # nvmfpid=100476 00:24:14.851 15:10:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:14.851 15:10:38 -- nvmf/common.sh@470 -- # waitforlisten 100476 00:24:14.851 15:10:38 -- common/autotest_common.sh@829 -- # '[' -z 100476 ']' 00:24:14.851 15:10:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.851 15:10:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:14.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.851 15:10:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.851 15:10:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:14.851 15:10:38 -- common/autotest_common.sh@10 -- # set +x 00:24:14.851 [2024-11-18 15:10:38.407365] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:14.851 [2024-11-18 15:10:38.407444] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.110 [2024-11-18 15:10:38.549488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:15.110 [2024-11-18 15:10:38.607794] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:15.110 [2024-11-18 15:10:38.607927] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.110 [2024-11-18 15:10:38.607939] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
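The nvmf_veth_init sequence traced above builds the whole test fabric: a target network namespace, three veth pairs joined by a bridge, the 10.0.0.1/2/3 addresses, and an iptables accept rule for port 4420, verified with the pings shown. A condensed replay of those same commands (root required; a sketch distilled from the trace, not the common.sh source itself):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk            # target-side ends live in the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                    # bridge the host-side veth ends together
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                         # initiator -> target reachability check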
00:24:15.110 [2024-11-18 15:10:38.607947] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:15.110 [2024-11-18 15:10:38.608124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.110 [2024-11-18 15:10:38.608135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.044 15:10:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:16.044 15:10:39 -- common/autotest_common.sh@862 -- # return 0 00:24:16.044 15:10:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:16.044 15:10:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:16.045 15:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:16.045 15:10:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.045 15:10:39 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:16.045 15:10:39 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:16.303 [2024-11-18 15:10:39.658815] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.303 15:10:39 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:16.303 Malloc0 00:24:16.562 15:10:39 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:16.562 15:10:40 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:17.130 15:10:40 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:17.130 [2024-11-18 15:10:40.592714] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.130 15:10:40 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:17.130 15:10:40 -- host/timeout.sh@32 -- # bdevperf_pid=100567 00:24:17.130 15:10:40 -- host/timeout.sh@34 -- # waitforlisten 100567 /var/tmp/bdevperf.sock 00:24:17.130 15:10:40 -- common/autotest_common.sh@829 -- # '[' -z 100567 ']' 00:24:17.130 15:10:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:17.130 15:10:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:17.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:17.130 15:10:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:17.130 15:10:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:17.130 15:10:40 -- common/autotest_common.sh@10 -- # set +x 00:24:17.130 [2024-11-18 15:10:40.663684] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
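With the fabric up, the target side reduces to one app launch plus five RPCs, all visible in the trace above. A condensed sketch with the paths, NQN, and options copied from the trace (the socket wait loop is an assumed stand-in for the test's waitforlisten helper):
spdk=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done       # wait for the target's RPC socket
$spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192               # options as traced
$spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                  # 64 MB malloc bdev, 512 B blocks
$spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420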
00:24:17.130 [2024-11-18 15:10:40.663773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100567 ] 00:24:17.389 [2024-11-18 15:10:40.807019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.389 [2024-11-18 15:10:40.880423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.326 15:10:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:18.326 15:10:41 -- common/autotest_common.sh@862 -- # return 0 00:24:18.326 15:10:41 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:18.326 15:10:41 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:18.584 NVMe0n1 00:24:18.584 15:10:42 -- host/timeout.sh@51 -- # rpc_pid=100615 00:24:18.584 15:10:42 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:18.584 15:10:42 -- host/timeout.sh@53 -- # sleep 1 00:24:18.843 Running I/O for 10 seconds... 00:24:19.782 15:10:43 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:19.782 [2024-11-18 15:10:43.261339] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1632490 is same with the state(5) to be set 00:24:19.782 [2024-11-18 15:10:43.261401] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1632490 is same with the state(5) to be set 00:24:19.782 [2024-11-18 15:10:43.261413] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1632490 is same with the state(5) to be set 00:24:19.782 [2024-11-18 15:10:43.261421] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1632490 is same with the state(5) to be set 00:24:19.782 [2024-11-18 15:10:43.261429] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1632490 is same with the state(5) to be set 00:24:19.782 [2024-11-18 15:10:43.261438] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1632490 is same with the state(5) to be set 00:24:19.782 [2024-11-18 15:10:43.261446] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1632490 is same with the state(5) to be set 00:24:19.782 [2024-11-18 15:10:43.261453] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1632490 is same with the state(5) to be set 00:24:19.782 [2024-11-18 15:10:43.261461] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1632490 is same with the state(5) to be set 00:24:19.782 [2024-11-18 15:10:43.261469] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1632490 is same with the state(5) to be set 00:24:19.782 [2024-11-18 15:10:43.261477] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1632490 is same with the state(5) to be set 00:24:19.782 [2024-11-18 15:10:43.261485] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1632490 is same with the state(5) to be set 
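The host side of the timeout test is also fully visible in the trace: bdevperf is started with its own RPC socket, the controller is attached with a 5 s controller-loss timeout and 2 s reconnect delay, the verify workload is kicked off, and the listener is then removed, which provokes the queued-I/O aborts that fill the log below. A condensed sketch with the flags copied from the trace (assumes /var/tmp/bdevperf.sock is ready before the bdev_nvme calls, as the test's waitforlisten ensures):
spdk=/home/vagrant/spdk_repo/spdk
$spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1   # option as traced
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
sleep 1
$spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420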
00:24:19.782 [2024-11-18 15:10:43.261492] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1632490 is same with the state(5) to be set 00:24:19.782 [2024-11-18 15:10:43.261500] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1632490 is same with the state(5) to be set 00:24:19.782 [2024-11-18 15:10:43.261507] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1632490 is same with the state(5) to be set 00:24:19.782 [2024-11-18 15:10:43.261515] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1632490 is same with the state(5) to be set 00:24:19.782 [2024-11-18 15:10:43.261523] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1632490 is same with the state(5) to be set 00:24:19.782 [2024-11-18 15:10:43.261531] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1632490 is same with the state(5) to be set 00:24:19.782 [2024-11-18 15:10:43.261709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.782 [2024-11-18 15:10:43.261738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.782 [2024-11-18 15:10:43.261750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.782 [2024-11-18 15:10:43.261759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.782 [2024-11-18 15:10:43.261769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.782 [2024-11-18 15:10:43.261777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.782 [2024-11-18 15:10:43.261786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.782 [2024-11-18 15:10:43.261794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.261803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6768c0 is same with the state(5) to be set 00:24:19.783 [2024-11-18 15:10:43.261875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.261888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.261907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.261917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.261927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.261936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.261946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.261954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.261964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.261974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.261984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.261992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.783 [2024-11-18 15:10:43.262106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:126864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.783 [2024-11-18 15:10:43.262142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.783 [2024-11-18 15:10:43.262160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.783 [2024-11-18 15:10:43.262178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:126912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.783 [2024-11-18 15:10:43.262251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 
[2024-11-18 15:10:43.262316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:126960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.783 [2024-11-18 15:10:43.262406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:126976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.783 [2024-11-18 15:10:43.262445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:126320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262555] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.783 [2024-11-18 15:10:43.262642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.783 [2024-11-18 15:10:43.262652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.262661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.262671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.262680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.262706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.262714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.262724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.262748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.262758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.262766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.262777] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.262785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.262795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.262804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.262814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.262822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.262832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.262840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.262850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.262858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.262868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.262876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.262885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.262893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.262903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.262911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.262921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.262930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.262940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.262949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.262959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.262967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.262978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.784 [2024-11-18 15:10:43.262986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.262996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.784 [2024-11-18 15:10:43.263004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.263014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:127016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.784 [2024-11-18 15:10:43.263025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.263035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:127024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.263044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.263055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:127032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.263063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.263074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:127040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.784 [2024-11-18 15:10:43.263083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.263093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.784 [2024-11-18 15:10:43.263102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.263112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.784 [2024-11-18 15:10:43.263120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.263130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.784 [2024-11-18 15:10:43.263138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.263148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:110 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.263157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.263166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.784 [2024-11-18 15:10:43.263174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.263184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.784 [2024-11-18 15:10:43.263192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.263202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:127096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.263210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.263220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:127104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.263228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.263238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:127112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.263247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.263257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:127120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.263265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.263275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.784 [2024-11-18 15:10:43.263283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.263293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:127136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.263301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.263311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.263319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.263353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 
lba:127152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.263373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.263385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:127160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.784 [2024-11-18 15:10:43.263395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.263405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.263415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.263427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:127176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.784 [2024-11-18 15:10:43.263436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.784 [2024-11-18 15:10:43.263446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:127184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.784 [2024-11-18 15:10:43.263455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:127192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.785 [2024-11-18 15:10:43.263475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.785 [2024-11-18 15:10:43.263494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.785 [2024-11-18 15:10:43.263513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.785 [2024-11-18 15:10:43.263532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:127224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.785 [2024-11-18 15:10:43.263551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:127232 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:19.785 [2024-11-18 15:10:43.263570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:127240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.785 [2024-11-18 15:10:43.263589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:127248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.785 [2024-11-18 15:10:43.263609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:127256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.785 [2024-11-18 15:10:43.263629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:127264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.785 [2024-11-18 15:10:43.263648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.785 [2024-11-18 15:10:43.263668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:127280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.785 [2024-11-18 15:10:43.263704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.785 [2024-11-18 15:10:43.263724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:127296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.785 [2024-11-18 15:10:43.263742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:127304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.785 [2024-11-18 15:10:43.263761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.785 
[2024-11-18 15:10:43.263779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.785 [2024-11-18 15:10:43.263798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.785 [2024-11-18 15:10:43.263817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:127336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.785 [2024-11-18 15:10:43.263835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.785 [2024-11-18 15:10:43.263854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:127352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.785 [2024-11-18 15:10:43.263873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:127360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.785 [2024-11-18 15:10:43.263891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.785 [2024-11-18 15:10:43.263910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:127376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.785 [2024-11-18 15:10:43.263934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.785 [2024-11-18 15:10:43.263953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.785 [2024-11-18 15:10:43.263972] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.263981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:127400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.785 [2024-11-18 15:10:43.264000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.264010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:127408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.785 [2024-11-18 15:10:43.264018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.264029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:127416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.785 [2024-11-18 15:10:43.264038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.264048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:127424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.785 [2024-11-18 15:10:43.264056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.264066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:127432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.785 [2024-11-18 15:10:43.264074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.264086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:127440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.785 [2024-11-18 15:10:43.264094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.264104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.785 [2024-11-18 15:10:43.264112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.264122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:127456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.785 [2024-11-18 15:10:43.264130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.264140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.785 [2024-11-18 15:10:43.264148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.264158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:127472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.785 [2024-11-18 15:10:43.264166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.264177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:127480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.785 [2024-11-18 15:10:43.264185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.264194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:127488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.785 [2024-11-18 15:10:43.264203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.264213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.785 [2024-11-18 15:10:43.264221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.785 [2024-11-18 15:10:43.264231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:127504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.786 [2024-11-18 15:10:43.264245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.786 [2024-11-18 15:10:43.264261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.786 [2024-11-18 15:10:43.264269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.786 [2024-11-18 15:10:43.264280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:127520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.786 [2024-11-18 15:10:43.264288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.786 [2024-11-18 15:10:43.264298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.786 [2024-11-18 15:10:43.264306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.786 [2024-11-18 15:10:43.264316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.786 [2024-11-18 15:10:43.264357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.786 [2024-11-18 15:10:43.264370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.786 [2024-11-18 15:10:43.264378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.786 [2024-11-18 15:10:43.264389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.786 [2024-11-18 15:10:43.264398] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.786 [2024-11-18 15:10:43.264408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.786 [2024-11-18 15:10:43.264417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.786 [2024-11-18 15:10:43.264427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.786 [2024-11-18 15:10:43.264436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.786 [2024-11-18 15:10:43.264447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.786 [2024-11-18 15:10:43.264456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.786 [2024-11-18 15:10:43.264466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fb780 is same with the state(5) to be set 00:24:19.786 [2024-11-18 15:10:43.264477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.786 [2024-11-18 15:10:43.264484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.786 [2024-11-18 15:10:43.264492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126840 len:8 PRP1 0x0 PRP2 0x0 00:24:19.786 [2024-11-18 15:10:43.264502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.786 [2024-11-18 15:10:43.264563] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6fb780 was disconnected and freed. reset controller. 00:24:19.786 [2024-11-18 15:10:43.264811] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.786 [2024-11-18 15:10:43.264840] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6768c0 (9): Bad file descriptor 00:24:19.786 [2024-11-18 15:10:43.264943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-11-18 15:10:43.264990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:19.786 [2024-11-18 15:10:43.265006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6768c0 with addr=10.0.0.2, port=4420 00:24:19.786 [2024-11-18 15:10:43.265016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6768c0 is same with the state(5) to be set 00:24:19.786 [2024-11-18 15:10:43.265033] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6768c0 (9): Bad file descriptor 00:24:19.786 [2024-11-18 15:10:43.265055] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:19.786 [2024-11-18 15:10:43.265070] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:19.786 [2024-11-18 15:10:43.265081] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
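The reset attempts above repeat roughly every two seconds in this log (15:10:43, :45, :47, :49): each one calls connect() toward 10.0.0.2:4420, gets errno 111 (ECONNREFUSED) because nothing is accepting connections there, spdk_nvme_ctrlr_reconnect_poll_async() reports the failure, the controller is marked failed, and bdev_nvme schedules the next reset. A minimal sketch of how the same loop can be driven by hand, using only RPCs that appear elsewhere in this log (socket and script paths are the ones the test prints; pairing the listener removal with the errno 111 loop is the assumption here):

# Drop the target-side listener; new TCP connections to 10.0.0.2:4420 are then refused.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# While bdev_nvme is still retrying, the controller is listed; once it is given
# up on, the list goes empty (the '' == '' checks later in this log).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_get_controllers | jq -r '.[].name'

# Restore the listener so the next reset attempt can reconnect.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420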
00:24:19.786 [2024-11-18 15:10:43.278590] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:19.786 [2024-11-18 15:10:43.278623] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:19.786 15:10:43 -- host/timeout.sh@56 -- # sleep 2 00:24:22.320 [2024-11-18 15:10:45.278707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.320 [2024-11-18 15:10:45.278787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:22.320 [2024-11-18 15:10:45.278805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6768c0 with addr=10.0.0.2, port=4420 00:24:22.320 [2024-11-18 15:10:45.278815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6768c0 is same with the state(5) to be set 00:24:22.320 [2024-11-18 15:10:45.278834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6768c0 (9): Bad file descriptor 00:24:22.320 [2024-11-18 15:10:45.278850] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:22.320 [2024-11-18 15:10:45.278858] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:22.320 [2024-11-18 15:10:45.278867] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:22.320 [2024-11-18 15:10:45.278886] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:22.320 [2024-11-18 15:10:45.278896] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:22.320 15:10:45 -- host/timeout.sh@57 -- # get_controller 00:24:22.320 15:10:45 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:22.320 15:10:45 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:22.320 15:10:45 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:24:22.320 15:10:45 -- host/timeout.sh@58 -- # get_bdev 00:24:22.320 15:10:45 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:22.320 15:10:45 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:22.320 15:10:45 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:24:22.320 15:10:45 -- host/timeout.sh@61 -- # sleep 5 00:24:23.697 [2024-11-18 15:10:47.279017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.697 [2024-11-18 15:10:47.279118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.697 [2024-11-18 15:10:47.279141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6768c0 with addr=10.0.0.2, port=4420 00:24:23.697 [2024-11-18 15:10:47.279156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6768c0 is same with the state(5) to be set 00:24:23.697 [2024-11-18 15:10:47.279183] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6768c0 (9): Bad file descriptor 00:24:23.697 [2024-11-18 15:10:47.279204] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:23.697 [2024-11-18 15:10:47.279215] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization 
failed 00:24:23.697 [2024-11-18 15:10:47.279226] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:23.697 [2024-11-18 15:10:47.279253] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:23.697 [2024-11-18 15:10:47.279266] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.230 [2024-11-18 15:10:49.279294] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.230 [2024-11-18 15:10:49.279331] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.230 [2024-11-18 15:10:49.279353] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.230 [2024-11-18 15:10:49.279361] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:26.230 [2024-11-18 15:10:49.279381] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:26.797 00:24:26.797 Latency(us) 00:24:26.797 [2024-11-18T15:10:50.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.797 [2024-11-18T15:10:50.387Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:26.797 Verification LBA range: start 0x0 length 0x4000 00:24:26.797 NVMe0n1 : 8.08 1957.13 7.65 15.84 0.00 64797.55 2353.34 7015926.69 00:24:26.797 [2024-11-18T15:10:50.387Z] =================================================================================================================== 00:24:26.797 [2024-11-18T15:10:50.387Z] Total : 1957.13 7.65 15.84 0.00 64797.55 2353.34 7015926.69 00:24:26.797 0 00:24:27.366 15:10:50 -- host/timeout.sh@62 -- # get_controller 00:24:27.366 15:10:50 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:27.366 15:10:50 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:27.624 15:10:51 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:24:27.624 15:10:51 -- host/timeout.sh@63 -- # get_bdev 00:24:27.624 15:10:51 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:27.624 15:10:51 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:27.883 15:10:51 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:24:27.883 15:10:51 -- host/timeout.sh@65 -- # wait 100615 00:24:27.883 15:10:51 -- host/timeout.sh@67 -- # killprocess 100567 00:24:27.883 15:10:51 -- common/autotest_common.sh@936 -- # '[' -z 100567 ']' 00:24:27.883 15:10:51 -- common/autotest_common.sh@940 -- # kill -0 100567 00:24:27.883 15:10:51 -- common/autotest_common.sh@941 -- # uname 00:24:27.883 15:10:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:27.883 15:10:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100567 00:24:27.883 killing process with pid 100567 00:24:27.883 Received shutdown signal, test time was about 9.095154 seconds 00:24:27.883 00:24:27.883 Latency(us) 00:24:27.883 [2024-11-18T15:10:51.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.883 [2024-11-18T15:10:51.473Z] =================================================================================================================== 00:24:27.883 [2024-11-18T15:10:51.473Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:27.883 15:10:51 -- common/autotest_common.sh@942 
-- # process_name=reactor_2 00:24:27.883 15:10:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:27.883 15:10:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100567' 00:24:27.883 15:10:51 -- common/autotest_common.sh@955 -- # kill 100567 00:24:27.883 15:10:51 -- common/autotest_common.sh@960 -- # wait 100567 00:24:28.142 15:10:51 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:28.402 [2024-11-18 15:10:51.779580] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:28.402 15:10:51 -- host/timeout.sh@74 -- # bdevperf_pid=100768 00:24:28.402 15:10:51 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:28.402 15:10:51 -- host/timeout.sh@76 -- # waitforlisten 100768 /var/tmp/bdevperf.sock 00:24:28.402 15:10:51 -- common/autotest_common.sh@829 -- # '[' -z 100768 ']' 00:24:28.402 15:10:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:28.402 15:10:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:28.402 15:10:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:28.402 15:10:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:28.402 15:10:51 -- common/autotest_common.sh@10 -- # set +x 00:24:28.402 [2024-11-18 15:10:51.838879] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:28.402 [2024-11-18 15:10:51.838954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100768 ] 00:24:28.402 [2024-11-18 15:10:51.974914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.661 [2024-11-18 15:10:52.044505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.238 15:10:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:29.238 15:10:52 -- common/autotest_common.sh@862 -- # return 0 00:24:29.238 15:10:52 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:29.501 15:10:52 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:24:29.759 NVMe0n1 00:24:29.759 15:10:53 -- host/timeout.sh@84 -- # rpc_pid=100820 00:24:29.759 15:10:53 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:29.759 15:10:53 -- host/timeout.sh@86 -- # sleep 1 00:24:30.019 Running I/O for 10 seconds... 
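The second bdevperf instance above is attached with explicit reconnect behaviour: roughly, --reconnect-delay-sec 1 spaces reconnect attempts one second apart, --fast-io-fail-timeout-sec 2 lets queued and new I/O start failing quickly after two seconds without a connection, and --ctrlr-loss-timeout-sec 5 abandons the controller after five seconds of continuous disconnection (-r -1 to bdev_nvme_set_options is read here as an unlimited bdev-layer retry count). Collected from the commands printed above, the setup for this phase looks like the following sketch:

# Start bdevperf on its own RPC socket (same arguments as in the log);
# -z defers the workload until perform_tests is called.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
    -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &

# Assumed meaning of -r -1: retry failed I/O in the bdev layer without limit.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_set_options -r -1

# Attach the TCP controller with the 1 s / 2 s / 5 s reconnect, fast-fail and
# controller-loss timeouts exercised by the listener removal that follows.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 \
    --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

# Kick off the 10-second verify workload.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests &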
00:24:30.959 15:10:54 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:30.959 [2024-11-18 15:10:54.475273] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.475876] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.475975] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.476044] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.476100] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.476160] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.476217] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.476271] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.476340] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.476403] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.476461] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.476523] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.476578] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.476646] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.476707] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.476766] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.476838] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.476921] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.476984] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.477054] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.477113] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.477224] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.477284] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.477354] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.477426] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.477517] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.477574] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.477630] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.477685] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.477741] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.477803] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.477890] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.477945] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.478000] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.478073] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.478130] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.478185] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.478242] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.478356] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.478448] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.478521] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.478582] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.478656] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.478706] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.478774] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.478832] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.478923] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.959 [2024-11-18 15:10:54.479002] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.960 [2024-11-18 15:10:54.479094] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.960 [2024-11-18 15:10:54.479153] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.960 [2024-11-18 15:10:54.479211] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.960 [2024-11-18 15:10:54.479268] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.960 [2024-11-18 15:10:54.479325] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.960 [2024-11-18 15:10:54.479404] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.960 [2024-11-18 15:10:54.479469] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.960 [2024-11-18 15:10:54.479540] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.960 [2024-11-18 15:10:54.479601] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7ca0 is same with the state(5) to be set 00:24:30.960 [2024-11-18 15:10:54.480121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480242] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:44 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:130672 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.960 [2024-11-18 15:10:54.480773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.960 [2024-11-18 15:10:54.480790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.960 [2024-11-18 
15:10:54.480861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.960 [2024-11-18 15:10:54.480871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.960 [2024-11-18 15:10:54.480879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.480889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.961 [2024-11-18 15:10:54.480896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.480906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.480914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.480924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.961 [2024-11-18 15:10:54.480931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.480956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.480965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.480974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.480998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.961 [2024-11-18 15:10:54.481032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.961 [2024-11-18 15:10:54.481315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.961 [2024-11-18 15:10:54.481375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.961 [2024-11-18 15:10:54.481396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.961 [2024-11-18 15:10:54.481416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.961 [2024-11-18 15:10:54.481454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.961 [2024-11-18 15:10:54.481545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.961 [2024-11-18 15:10:54.481564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.961 [2024-11-18 15:10:54.481601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:30.961 [2024-11-18 15:10:54.481611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.961 [2024-11-18 15:10:54.481619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 15:10:54.481816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.961 [2024-11-18 
15:10:54.481834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.961 [2024-11-18 15:10:54.481842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.481852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:130912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.481886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.481897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.481905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.481915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.481923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.481932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.962 [2024-11-18 15:10:54.481940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.481950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.481958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.481968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.962 [2024-11-18 15:10:54.481977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.481986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.962 [2024-11-18 15:10:54.481994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.482011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.482029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.482047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.962 [2024-11-18 15:10:54.482064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.962 [2024-11-18 15:10:54.482082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.962 [2024-11-18 15:10:54.482101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.962 [2024-11-18 15:10:54.482120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.482138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.962 [2024-11-18 15:10:54.482155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.482173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.482191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:130952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.482209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:120 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.482227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.482247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.482267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.482285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:131032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.482303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.962 [2024-11-18 15:10:54.482321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.482339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.482358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.482377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.482395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:544 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:30.962 [2024-11-18 15:10:54.482413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.482443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.962 [2024-11-18 15:10:54.482462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.482485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.962 [2024-11-18 15:10:54.482503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.962 [2024-11-18 15:10:54.482521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.962 [2024-11-18 15:10:54.482539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.962 [2024-11-18 15:10:54.482556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.962 [2024-11-18 15:10:54.482573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.962 [2024-11-18 15:10:54.482591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.962 [2024-11-18 15:10:54.482600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.962 [2024-11-18 15:10:54.482608] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.963 [2024-11-18 15:10:54.482617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.963 [2024-11-18 15:10:54.482625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.963 [2024-11-18 15:10:54.482644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.963 [2024-11-18 15:10:54.482653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.963 [2024-11-18 15:10:54.482663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.963 [2024-11-18 15:10:54.482671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.963 [2024-11-18 15:10:54.482681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.963 [2024-11-18 15:10:54.482689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.963 [2024-11-18 15:10:54.482699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.963 [2024-11-18 15:10:54.482707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.963 [2024-11-18 15:10:54.482717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.963 [2024-11-18 15:10:54.482725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.963 [2024-11-18 15:10:54.482735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.963 [2024-11-18 15:10:54.482743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.963 [2024-11-18 15:10:54.482752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.963 [2024-11-18 15:10:54.482760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.963 [2024-11-18 15:10:54.482775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.963 [2024-11-18 15:10:54.482783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.963 [2024-11-18 15:10:54.482793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.963 [2024-11-18 15:10:54.482802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:30.963 [2024-11-18 15:10:54.482811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:30.963 [2024-11-18 15:10:54.482819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:30.963 [2024-11-18 15:10:54.482829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:30.963 [2024-11-18 15:10:54.482837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:30.963 [2024-11-18 15:10:54.482847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225f660 is same with the state(5) to be set
00:24:30.963 [2024-11-18 15:10:54.482857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:30.963 [2024-11-18 15:10:54.482864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:30.963 [2024-11-18 15:10:54.482871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80 len:8 PRP1 0x0 PRP2 0x0
00:24:30.963 [2024-11-18 15:10:54.482879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:30.963 [2024-11-18 15:10:54.482928] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x225f660 was disconnected and freed. reset controller.
00:24:30.963 [2024-11-18 15:10:54.483127] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:30.963 [2024-11-18 15:10:54.483204] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21da8c0 (9): Bad file descriptor
00:24:30.963 [2024-11-18 15:10:54.489124] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21da8c0 (9): Bad file descriptor
00:24:30.963 [2024-11-18 15:10:54.489223] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:30.963 [2024-11-18 15:10:54.489234] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:30.963 [2024-11-18 15:10:54.489244] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:30.963 [2024-11-18 15:10:54.489263] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:30.963 [2024-11-18 15:10:54.489274] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:30.963 15:10:54 -- host/timeout.sh@90 -- # sleep 1
00:24:31.902 [2024-11-18 15:10:55.489351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:32.161 [2024-11-18 15:10:55.489952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:32.161 [2024-11-18 15:10:55.490085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21da8c0 with addr=10.0.0.2, port=4420
00:24:32.161 [2024-11-18 15:10:55.490220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21da8c0 is same with the state(5) to be set
00:24:32.161 [2024-11-18 15:10:55.490312] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21da8c0 (9): Bad file descriptor
00:24:32.161 [2024-11-18 15:10:55.490440] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:32.161 [2024-11-18 15:10:55.490534] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:32.161 [2024-11-18 15:10:55.490589] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:32.161 [2024-11-18 15:10:55.490703] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:32.161 [2024-11-18 15:10:55.490806] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:32.161 15:10:55 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:32.420 [2024-11-18 15:10:55.751071] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:32.420 15:10:55 -- host/timeout.sh@92 -- # wait 100820
00:24:32.987 [2024-11-18 15:10:56.510653] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:41.107
00:24:41.107 Latency(us)
00:24:41.107 [2024-11-18T15:11:04.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:41.107 [2024-11-18T15:11:04.697Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:41.107 Verification LBA range: start 0x0 length 0x4000
00:24:41.107 NVMe0n1 : 10.01 10696.82 41.78 0.00 0.00 11944.30 1027.72 3019898.88
00:24:41.107 [2024-11-18T15:11:04.697Z] ===================================================================================================================
00:24:41.107 [2024-11-18T15:11:04.697Z] Total : 10696.82 41.78 0.00 0.00 11944.30 1027.72 3019898.88
00:24:41.107 0
00:24:41.107 15:11:03 -- host/timeout.sh@97 -- # rpc_pid=100938
00:24:41.107 15:11:03 -- host/timeout.sh@98 -- # sleep 1
00:24:41.107 15:11:03 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:41.107 Running I/O for 10 seconds...
00:24:41.107 15:11:04 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:41.107 [2024-11-18 15:11:04.612389] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.107 [2024-11-18 15:11:04.612462] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.107 [2024-11-18 15:11:04.612472] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612479] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612487] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612495] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612501] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612508] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612515] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612523] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612530] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612537] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612544] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612551] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612558] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612565] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612581] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612588] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612594] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612600] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612607] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612613] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612620] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612627] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612634] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612640] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612647] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612653] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612660] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612674] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612682] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612689] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612695] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612702] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612718] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612725] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612732] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612738] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612746] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612753] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612760] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612767] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612780] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612787] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612794] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612800] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612807] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612814] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612821] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612827] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612834] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612840] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612847] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612854] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612861] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612867] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612873] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612880] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612886] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612892] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612898] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612904] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612914] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612921] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the 
state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612928] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612935] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612941] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612948] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612955] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.612961] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633110 is same with the state(5) to be set 00:24:41.108 [2024-11-18 15:11:04.615155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.108 [2024-11-18 15:11:04.615231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.108 [2024-11-18 15:11:04.615255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.108 [2024-11-18 15:11:04.615264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.108 [2024-11-18 15:11:04.615274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.108 [2024-11-18 15:11:04.615283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.108 [2024-11-18 15:11:04.615292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.108 [2024-11-18 15:11:04.615299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.108 [2024-11-18 15:11:04.615311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.108 [2024-11-18 15:11:04.615340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.108 [2024-11-18 15:11:04.615350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.108 [2024-11-18 15:11:04.615358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.108 [2024-11-18 15:11:04.615366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:59 nsid:1 lba:3560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3616 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 
[2024-11-18 15:11:04.615712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.109 [2024-11-18 15:11:04.615768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.109 [2024-11-18 15:11:04.615785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.109 [2024-11-18 15:11:04.615817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.109 [2024-11-18 15:11:04.615849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.109 [2024-11-18 15:11:04.615869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615886] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.109 [2024-11-18 15:11:04.615919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.109 [2024-11-18 15:11:04.615934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.109 [2024-11-18 15:11:04.615983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.615992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.109 [2024-11-18 15:11:04.615999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.616007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.109 [2024-11-18 15:11:04.616013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.109 [2024-11-18 15:11:04.616022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.110 [2024-11-18 15:11:04.616054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.110 [2024-11-18 15:11:04.616070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.110 [2024-11-18 15:11:04.616166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.110 [2024-11-18 15:11:04.616182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.110 [2024-11-18 15:11:04.616209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:41.110 [2024-11-18 15:11:04.616233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616418] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.110 [2024-11-18 15:11:04.616508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.110 [2024-11-18 15:11:04.616556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616580] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.110 [2024-11-18 15:11:04.616593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.110 [2024-11-18 15:11:04.616609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.110 [2024-11-18 15:11:04.616700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.110 [2024-11-18 15:11:04.616718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.616727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-11-18 15:11:04.616735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.616743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-11-18 15:11:04.616750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.616758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3536 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-11-18 15:11:04.616765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.616774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-11-18 15:11:04.616781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.616790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-11-18 15:11:04.616797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.616816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-11-18 15:11:04.616824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.616833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-11-18 15:11:04.616841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.616850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-11-18 15:11:04.616857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.616866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-11-18 15:11:04.616879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.616888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-11-18 15:11:04.616895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.616904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-11-18 15:11:04.616912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.616921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-11-18 15:11:04.616929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.616937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 
[2024-11-18 15:11:04.616945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.616955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-11-18 15:11:04.616962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.616971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-11-18 15:11:04.616978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.616987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-11-18 15:11:04.616995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-11-18 15:11:04.617011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-11-18 15:11:04.617027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-11-18 15:11:04.617044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-11-18 15:11:04.617062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-11-18 15:11:04.617084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-11-18 15:11:04.617101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-11-18 15:11:04.617117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-11-18 15:11:04.617134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-11-18 15:11:04.617180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-11-18 15:11:04.617199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-11-18 15:11:04.617218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-11-18 15:11:04.617235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-11-18 15:11:04.617252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-11-18 15:11:04.617268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-11-18 15:11:04.617284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-11-18 15:11:04.617301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-11-18 15:11:04.617317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-11-18 15:11:04.617344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-11-18 15:11:04.617362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-11-18 15:11:04.617379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-11-18 15:11:04.617403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-11-18 15:11:04.617432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-11-18 15:11:04.617449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-11-18 15:11:04.617459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-11-18 15:11:04.617467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.112 [2024-11-18 15:11:04.617476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-11-18 15:11:04.617497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.112 [2024-11-18 15:11:04.617525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:41.112 [2024-11-18 15:11:04.617534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.112 [2024-11-18 15:11:04.617541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3664 len:8 PRP1 0x0 PRP2 0x0 00:24:41.112 [2024-11-18 15:11:04.617549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.112 [2024-11-18 15:11:04.617625] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 
0x222b380 was disconnected and freed. reset controller. 00:24:41.112 [2024-11-18 15:11:04.617693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.112 [2024-11-18 15:11:04.617708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.112 [2024-11-18 15:11:04.617717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.112 [2024-11-18 15:11:04.617724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.112 [2024-11-18 15:11:04.617733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.112 [2024-11-18 15:11:04.617741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.112 [2024-11-18 15:11:04.617750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.112 [2024-11-18 15:11:04.617757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.112 [2024-11-18 15:11:04.617764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21da8c0 is same with the state(5) to be set 00:24:41.112 [2024-11-18 15:11:04.617935] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.112 [2024-11-18 15:11:04.617963] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21da8c0 (9): Bad file descriptor 00:24:41.112 [2024-11-18 15:11:04.618066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.112 [2024-11-18 15:11:04.618114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.112 [2024-11-18 15:11:04.618128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21da8c0 with addr=10.0.0.2, port=4420 00:24:41.112 [2024-11-18 15:11:04.618137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21da8c0 is same with the state(5) to be set 00:24:41.112 [2024-11-18 15:11:04.618154] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21da8c0 (9): Bad file descriptor 00:24:41.112 [2024-11-18 15:11:04.618168] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.112 [2024-11-18 15:11:04.618183] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.112 [2024-11-18 15:11:04.618193] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.112 [2024-11-18 15:11:04.618211] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.112 [2024-11-18 15:11:04.618221] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.112 15:11:04 -- host/timeout.sh@101 -- # sleep 3 00:24:42.049 [2024-11-18 15:11:05.618288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.049 [2024-11-18 15:11:05.618369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.049 [2024-11-18 15:11:05.618386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21da8c0 with addr=10.0.0.2, port=4420 00:24:42.049 [2024-11-18 15:11:05.618395] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21da8c0 is same with the state(5) to be set 00:24:42.049 [2024-11-18 15:11:05.618411] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21da8c0 (9): Bad file descriptor 00:24:42.049 [2024-11-18 15:11:05.618426] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.049 [2024-11-18 15:11:05.618435] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.049 [2024-11-18 15:11:05.618444] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.049 [2024-11-18 15:11:05.618461] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.049 [2024-11-18 15:11:05.618470] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.432 [2024-11-18 15:11:06.618531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.432 [2024-11-18 15:11:06.618608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.432 [2024-11-18 15:11:06.618624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21da8c0 with addr=10.0.0.2, port=4420 00:24:43.432 [2024-11-18 15:11:06.618633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21da8c0 is same with the state(5) to be set 00:24:43.432 [2024-11-18 15:11:06.618649] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21da8c0 (9): Bad file descriptor 00:24:43.432 [2024-11-18 15:11:06.618663] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.432 [2024-11-18 15:11:06.618671] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.432 [2024-11-18 15:11:06.618679] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.432 [2024-11-18 15:11:06.618695] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.432 [2024-11-18 15:11:06.618706] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.050 [2024-11-18 15:11:07.620372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.050 [2024-11-18 15:11:07.620438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.050 [2024-11-18 15:11:07.620453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21da8c0 with addr=10.0.0.2, port=4420 00:24:44.050 [2024-11-18 15:11:07.620463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21da8c0 is same with the state(5) to be set 00:24:44.050 [2024-11-18 15:11:07.620559] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21da8c0 (9): Bad file descriptor 00:24:44.050 [2024-11-18 15:11:07.620671] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.050 [2024-11-18 15:11:07.620682] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.050 [2024-11-18 15:11:07.620690] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.050 [2024-11-18 15:11:07.622620] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.050 [2024-11-18 15:11:07.622657] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.050 15:11:07 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:44.307 [2024-11-18 15:11:07.879790] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.565 15:11:07 -- host/timeout.sh@103 -- # wait 100938 00:24:45.130 [2024-11-18 15:11:08.638965] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:50.394
00:24:50.394 Latency(us)
00:24:50.394 [2024-11-18T15:11:13.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:50.394 [2024-11-18T15:11:13.984Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:50.394 Verification LBA range: start 0x0 length 0x4000
00:24:50.394 NVMe0n1 : 10.01 9268.48 36.21 7234.36 0.00 7743.82 644.19 3019898.88
00:24:50.394 [2024-11-18T15:11:13.984Z] ===================================================================================================================
00:24:50.394 [2024-11-18T15:11:13.984Z] Total : 9268.48 36.21 7234.36 0.00 7743.82 0.00 3019898.88
00:24:50.394 0
00:24:50.394 15:11:13 -- host/timeout.sh@105 -- # killprocess 100768
00:24:50.394 15:11:13 -- common/autotest_common.sh@936 -- # '[' -z 100768 ']'
00:24:50.394 15:11:13 -- common/autotest_common.sh@940 -- # kill -0 100768
00:24:50.394 15:11:13 -- common/autotest_common.sh@941 -- # uname
00:24:50.394 15:11:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:50.394 15:11:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100768
00:24:50.394 15:11:13 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:24:50.394 15:11:13 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:24:50.394 15:11:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100768'
killing process with pid 100768
Received shutdown signal, test time was about 10.000000 seconds
00:24:50.394
00:24:50.394 Latency(us)
00:24:50.394 [2024-11-18T15:11:13.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:50.394 [2024-11-18T15:11:13.984Z] ===================================================================================================================
00:24:50.394 [2024-11-18T15:11:13.984Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:50.394 15:11:13 -- common/autotest_common.sh@955 -- # kill 100768
00:24:50.394 15:11:13 -- common/autotest_common.sh@960 -- # wait 100768
00:24:50.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:50.394 15:11:13 -- host/timeout.sh@110 -- # bdevperf_pid=101063
00:24:50.394 15:11:13 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:24:50.394 15:11:13 -- host/timeout.sh@112 -- # waitforlisten 101063 /var/tmp/bdevperf.sock
00:24:50.394 15:11:13 -- common/autotest_common.sh@829 -- # '[' -z 101063 ']'
00:24:50.394 15:11:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:50.394 15:11:13 -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:50.394 15:11:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:50.394 15:11:13 -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:50.394 15:11:13 -- common/autotest_common.sh@10 -- # set +x
00:24:50.394 [2024-11-18 15:11:13.879184] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
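The xtrace entries above and immediately below record the harness launching a fresh bdevperf instance and then driving it over its RPC socket. Condensed into plain commands (paths, socket, and option values copied from this trace; a rough sketch of the sequence, not a verbatim re-run of host/timeout.sh):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
    # wait for /var/tmp/bdevperf.sock to appear, then set bdev_nvme options as the harness does (-r -1 -e 9)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
    # attach the TCP target with the reconnect parameters under test (5 s ctrlr-loss timeout, 2 s reconnect delay)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # start the timed randread workload
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests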
00:24:50.394 [2024-11-18 15:11:13.880070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101063 ] 00:24:50.653 [2024-11-18 15:11:14.017913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.653 [2024-11-18 15:11:14.081510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.221 15:11:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:51.221 15:11:14 -- common/autotest_common.sh@862 -- # return 0 00:24:51.221 15:11:14 -- host/timeout.sh@116 -- # dtrace_pid=101087 00:24:51.221 15:11:14 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 101063 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:24:51.221 15:11:14 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:24:51.788 15:11:15 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:51.788 NVMe0n1 00:24:51.788 15:11:15 -- host/timeout.sh@124 -- # rpc_pid=101140 00:24:51.788 15:11:15 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:51.788 15:11:15 -- host/timeout.sh@125 -- # sleep 1 00:24:52.046 Running I/O for 10 seconds... 00:24:52.982 15:11:16 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:53.244 [2024-11-18 15:11:16.624266] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636ba0 is same with the state(5) to be set 00:24:53.244 [2024-11-18 15:11:16.624310] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636ba0 is same with the state(5) to be set 00:24:53.244 [2024-11-18 15:11:16.624366] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636ba0 is same with the state(5) to be set 00:24:53.244 [2024-11-18 15:11:16.624375] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636ba0 is same with the state(5) to be set 00:24:53.244 [2024-11-18 15:11:16.624385] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636ba0 is same with the state(5) to be set 00:24:53.244 [2024-11-18 15:11:16.624393] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636ba0 is same with the state(5) to be set 00:24:53.244 [2024-11-18 15:11:16.624402] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636ba0 is same with the state(5) to be set 00:24:53.244 [2024-11-18 15:11:16.624410] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636ba0 is same with the state(5) to be set 00:24:53.244 [2024-11-18 15:11:16.624418] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636ba0 is same with the state(5) to be set 00:24:53.244 [2024-11-18 15:11:16.624426] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636ba0 is same with the state(5) to be set 00:24:53.244 [2024-11-18 15:11:16.624435] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1636ba0 is same with the state(5) to be set
00:24:53.244 [2024-11-18 15:11:16.624443] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636ba0 is same with the state(5) to be set
(the same tcp.c:1576 error repeats at each timestamp from 15:11:16.624451 through 15:11:16.625114)
00:24:53.245 [2024-11-18 15:11:16.625301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:53.245 [2024-11-18 15:11:16.625354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:53.245 [2024-11-18 15:11:16.625367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:53.245 [2024-11-18 15:11:16.625376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:53.245 [2024-11-18 15:11:16.625384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:53.245 [2024-11-18 15:11:16.625392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:53.245 [2024-11-18 15:11:16.625401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:53.245 [2024-11-18 15:11:16.625412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:53.245 [2024-11-18 15:11:16.625421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9b8c0 is same with the state(5) to be set
00:24:53.245 [2024-11-18 15:11:16.625491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:35 nsid:1 lba:107000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:90624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:53872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101920 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:129968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:93256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:52304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:53.245 [2024-11-18 15:11:16.625878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:26968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:115120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.625986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.625994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.626004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:118248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.626012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.626021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:37120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.626028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.626038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.245 [2024-11-18 15:11:16.626045] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.245 [2024-11-18 15:11:16.626054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:105320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:41432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:56464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:74328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:115296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:107832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:30824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626394] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:31944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:107680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:89424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:130952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.246 [2024-11-18 15:11:16.626720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.246 [2024-11-18 15:11:16.626727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:53.247 [2024-11-18 15:11:16.626736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:33072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.626744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.626753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:53104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.626761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.626770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:58488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.626777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.626787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.626794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.626804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:118256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.626813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.626822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.626830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.626840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.626848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.626859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:88232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.626867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.626876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:47640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.626884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.626901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.626910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.626919] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.626928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.626937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.626945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.626955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.626962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.626971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.626979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.626988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.626995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:54960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:33576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:48160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:28256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:119536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627255] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:117280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.247 [2024-11-18 15:11:16.627412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.247 [2024-11-18 15:11:16.627421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.248 [2024-11-18 15:11:16.627429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.248 [2024-11-18 15:11:16.627438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 
lba:43472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.248 [2024-11-18 15:11:16.627446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.248 [2024-11-18 15:11:16.627461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.248 [2024-11-18 15:11:16.627469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.248 [2024-11-18 15:11:16.627478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:51864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.248 [2024-11-18 15:11:16.627486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.248 [2024-11-18 15:11:16.627495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.248 [2024-11-18 15:11:16.627502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.248 [2024-11-18 15:11:16.627511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:54248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.248 [2024-11-18 15:11:16.627519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.248 [2024-11-18 15:11:16.627527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.248 [2024-11-18 15:11:16.627534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.248 [2024-11-18 15:11:16.627543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.248 [2024-11-18 15:11:16.627549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.248 [2024-11-18 15:11:16.627558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.248 [2024-11-18 15:11:16.627565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.248 [2024-11-18 15:11:16.627574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:54360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.248 [2024-11-18 15:11:16.627582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.248 [2024-11-18 15:11:16.627590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:51544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.248 [2024-11-18 15:11:16.627598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.248 [2024-11-18 15:11:16.627607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:118336 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:53.248 [2024-11-18 15:11:16.627614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.248 [2024-11-18 15:11:16.627629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:111864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.248 [2024-11-18 15:11:16.627638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.248 [2024-11-18 15:11:16.627647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.248 [2024-11-18 15:11:16.627655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.248 [2024-11-18 15:11:16.627664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.248 [2024-11-18 15:11:16.627671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.248 [2024-11-18 15:11:16.627680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:118200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.248 [2024-11-18 15:11:16.627688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.248 [2024-11-18 15:11:16.627696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.248 [2024-11-18 15:11:16.627703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.248 [2024-11-18 15:11:16.627711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1020780 is same with the state(5) to be set 00:24:53.248 [2024-11-18 15:11:16.627720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:53.248 [2024-11-18 15:11:16.627740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:53.248 [2024-11-18 15:11:16.627747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96816 len:8 PRP1 0x0 PRP2 0x0 00:24:53.248 [2024-11-18 15:11:16.627754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.248 [2024-11-18 15:11:16.627801] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1020780 was disconnected and freed. reset controller. 
00:24:53.248 [2024-11-18 15:11:16.628032] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:53.248 [2024-11-18 15:11:16.628076] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9b8c0 (9): Bad file descriptor
00:24:53.248 [2024-11-18 15:11:16.628168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.248 [2024-11-18 15:11:16.628212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.248 [2024-11-18 15:11:16.628226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf9b8c0 with addr=10.0.0.2, port=4420 [2024-11-18 15:11:16.628235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9b8c0 is same with the state(5) to be set
00:24:53.248 [2024-11-18 15:11:16.628251] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9b8c0 (9): Bad file descriptor
00:24:53.248 [2024-11-18 15:11:16.628265] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:53.248 [2024-11-18 15:11:16.628273] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:53.248 [2024-11-18 15:11:16.628282] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:53.248 [2024-11-18 15:11:16.644150] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:53.248 [2024-11-18 15:11:16.644189] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:53.248 15:11:16 -- host/timeout.sh@128 -- # wait 101140
00:24:55.151 [2024-11-18 15:11:18.644270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.151 [2024-11-18 15:11:18.644373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.151 [2024-11-18 15:11:18.644390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf9b8c0 with addr=10.0.0.2, port=4420 [2024-11-18 15:11:18.644401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9b8c0 is same with the state(5) to be set
00:24:55.151 [2024-11-18 15:11:18.644419] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9b8c0 (9): Bad file descriptor
00:24:55.151 [2024-11-18 15:11:18.644433] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:55.151 [2024-11-18 15:11:18.644442] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:55.151 [2024-11-18 15:11:18.644449] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:55.151 [2024-11-18 15:11:18.644466] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:55.152 [2024-11-18 15:11:18.644476] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.684 [2024-11-18 15:11:20.644549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.684 [2024-11-18 15:11:20.644624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.684 [2024-11-18 15:11:20.644641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf9b8c0 with addr=10.0.0.2, port=4420 [2024-11-18 15:11:20.644652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9b8c0 is same with the state(5) to be set
00:24:57.684 [2024-11-18 15:11:20.644669] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9b8c0 (9): Bad file descriptor
00:24:57.684 [2024-11-18 15:11:20.644685] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.684 [2024-11-18 15:11:20.644694] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.684 [2024-11-18 15:11:20.644703] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.684 [2024-11-18 15:11:20.644721] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.684 [2024-11-18 15:11:20.644731] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.063 [2024-11-18 15:11:22.644761] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.063 [2024-11-18 15:11:22.644802] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.063 [2024-11-18 15:11:22.644824] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.063 [2024-11-18 15:11:22.644832] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:24:59.063 [2024-11-18 15:11:22.644848] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:00.441
00:25:00.441 Latency(us)
00:25:00.441 [2024-11-18T15:11:24.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:00.441 [2024-11-18T15:11:24.031Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:25:00.441 NVMe0n1 : 8.17 3161.20 12.35 15.67 0.00 40240.21 1876.71 7015926.69
00:25:00.441 [2024-11-18T15:11:24.031Z] ===================================================================================================================
00:25:00.441 [2024-11-18T15:11:24.031Z] Total : 3161.20 12.35 15.67 0.00 40240.21 1876.71 7015926.69
00:25:00.441 0
00:25:00.441 15:11:23 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:25:00.441 Attaching 5 probes...
00:25:00.441 1297.308888: reset bdev controller NVMe0 00:25:00.441 1297.403961: reconnect bdev controller NVMe0 00:25:00.441 3313.497150: reconnect delay bdev controller NVMe0 00:25:00.441 3313.509307: reconnect bdev controller NVMe0 00:25:00.441 5313.777562: reconnect delay bdev controller NVMe0 00:25:00.441 5313.789348: reconnect bdev controller NVMe0 00:25:00.441 7314.033893: reconnect delay bdev controller NVMe0 00:25:00.441 7314.045554: reconnect bdev controller NVMe0 00:25:00.441 15:11:23 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:25:00.441 15:11:23 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:25:00.441 15:11:23 -- host/timeout.sh@136 -- # kill 101087 00:25:00.441 15:11:23 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:00.441 15:11:23 -- host/timeout.sh@139 -- # killprocess 101063 00:25:00.441 15:11:23 -- common/autotest_common.sh@936 -- # '[' -z 101063 ']' 00:25:00.441 15:11:23 -- common/autotest_common.sh@940 -- # kill -0 101063 00:25:00.441 15:11:23 -- common/autotest_common.sh@941 -- # uname 00:25:00.441 15:11:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:00.441 15:11:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101063 00:25:00.441 killing process with pid 101063 00:25:00.441 Received shutdown signal, test time was about 8.239078 seconds 00:25:00.441 00:25:00.441 Latency(us) 00:25:00.441 [2024-11-18T15:11:24.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.441 [2024-11-18T15:11:24.031Z] =================================================================================================================== 00:25:00.441 [2024-11-18T15:11:24.031Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:00.441 15:11:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:00.441 15:11:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:00.441 15:11:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101063' 00:25:00.441 15:11:23 -- common/autotest_common.sh@955 -- # kill 101063 00:25:00.441 15:11:23 -- common/autotest_common.sh@960 -- # wait 101063 00:25:00.441 15:11:23 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:00.700 15:11:24 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:25:00.700 15:11:24 -- host/timeout.sh@145 -- # nvmftestfini 00:25:00.700 15:11:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:00.700 15:11:24 -- nvmf/common.sh@116 -- # sync 00:25:00.700 15:11:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:00.700 15:11:24 -- nvmf/common.sh@119 -- # set +e 00:25:00.700 15:11:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:00.700 15:11:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:00.700 rmmod nvme_tcp 00:25:00.958 rmmod nvme_fabrics 00:25:00.958 rmmod nvme_keyring 00:25:00.958 15:11:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:00.958 15:11:24 -- nvmf/common.sh@123 -- # set -e 00:25:00.958 15:11:24 -- nvmf/common.sh@124 -- # return 0 00:25:00.958 15:11:24 -- nvmf/common.sh@477 -- # '[' -n 100476 ']' 00:25:00.958 15:11:24 -- nvmf/common.sh@478 -- # killprocess 100476 00:25:00.958 15:11:24 -- common/autotest_common.sh@936 -- # '[' -z 100476 ']' 00:25:00.958 15:11:24 -- common/autotest_common.sh@940 -- # kill -0 100476 00:25:00.958 15:11:24 -- common/autotest_common.sh@941 -- # uname 00:25:00.958 15:11:24 -- common/autotest_common.sh@941 -- # '[' 
Linux = Linux ']' 00:25:00.958 15:11:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100476 00:25:00.958 killing process with pid 100476 00:25:00.958 15:11:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:00.958 15:11:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:00.958 15:11:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100476' 00:25:00.958 15:11:24 -- common/autotest_common.sh@955 -- # kill 100476 00:25:00.958 15:11:24 -- common/autotest_common.sh@960 -- # wait 100476 00:25:01.217 15:11:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:01.217 15:11:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:01.217 15:11:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:01.217 15:11:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:01.217 15:11:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:01.217 15:11:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.217 15:11:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:01.217 15:11:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.217 15:11:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:01.217 00:25:01.217 real 0m46.852s 00:25:01.217 user 2m16.941s 00:25:01.217 sys 0m5.137s 00:25:01.217 15:11:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:01.217 15:11:24 -- common/autotest_common.sh@10 -- # set +x 00:25:01.217 ************************************ 00:25:01.217 END TEST nvmf_timeout 00:25:01.217 ************************************ 00:25:01.217 15:11:24 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:25:01.217 15:11:24 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:25:01.217 15:11:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:01.217 15:11:24 -- common/autotest_common.sh@10 -- # set +x 00:25:01.217 15:11:24 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:25:01.217 00:25:01.217 real 17m27.887s 00:25:01.217 user 55m36.299s 00:25:01.217 sys 3m38.953s 00:25:01.217 15:11:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:01.217 ************************************ 00:25:01.217 END TEST nvmf_tcp 00:25:01.217 15:11:24 -- common/autotest_common.sh@10 -- # set +x 00:25:01.217 ************************************ 00:25:01.217 15:11:24 -- spdk/autotest.sh@283 -- # [[ 0 -eq 0 ]] 00:25:01.217 15:11:24 -- spdk/autotest.sh@284 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:01.217 15:11:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:01.217 15:11:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:01.217 15:11:24 -- common/autotest_common.sh@10 -- # set +x 00:25:01.217 ************************************ 00:25:01.217 START TEST spdkcli_nvmf_tcp 00:25:01.217 ************************************ 00:25:01.217 15:11:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:01.476 * Looking for test storage... 
00:25:01.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:01.476 15:11:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:01.476 15:11:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:01.476 15:11:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:01.477 15:11:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:01.477 15:11:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:01.477 15:11:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:01.477 15:11:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:01.477 15:11:24 -- scripts/common.sh@335 -- # IFS=.-: 00:25:01.477 15:11:24 -- scripts/common.sh@335 -- # read -ra ver1 00:25:01.477 15:11:24 -- scripts/common.sh@336 -- # IFS=.-: 00:25:01.477 15:11:24 -- scripts/common.sh@336 -- # read -ra ver2 00:25:01.477 15:11:24 -- scripts/common.sh@337 -- # local 'op=<' 00:25:01.477 15:11:24 -- scripts/common.sh@339 -- # ver1_l=2 00:25:01.477 15:11:24 -- scripts/common.sh@340 -- # ver2_l=1 00:25:01.477 15:11:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:01.477 15:11:24 -- scripts/common.sh@343 -- # case "$op" in 00:25:01.477 15:11:24 -- scripts/common.sh@344 -- # : 1 00:25:01.477 15:11:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:01.477 15:11:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:01.477 15:11:24 -- scripts/common.sh@364 -- # decimal 1 00:25:01.477 15:11:24 -- scripts/common.sh@352 -- # local d=1 00:25:01.477 15:11:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:01.477 15:11:24 -- scripts/common.sh@354 -- # echo 1 00:25:01.477 15:11:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:01.477 15:11:24 -- scripts/common.sh@365 -- # decimal 2 00:25:01.477 15:11:24 -- scripts/common.sh@352 -- # local d=2 00:25:01.477 15:11:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:01.477 15:11:24 -- scripts/common.sh@354 -- # echo 2 00:25:01.477 15:11:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:01.477 15:11:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:01.477 15:11:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:01.477 15:11:24 -- scripts/common.sh@367 -- # return 0 00:25:01.477 15:11:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:01.477 15:11:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:01.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.477 --rc genhtml_branch_coverage=1 00:25:01.477 --rc genhtml_function_coverage=1 00:25:01.477 --rc genhtml_legend=1 00:25:01.477 --rc geninfo_all_blocks=1 00:25:01.477 --rc geninfo_unexecuted_blocks=1 00:25:01.477 00:25:01.477 ' 00:25:01.477 15:11:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:01.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.477 --rc genhtml_branch_coverage=1 00:25:01.477 --rc genhtml_function_coverage=1 00:25:01.477 --rc genhtml_legend=1 00:25:01.477 --rc geninfo_all_blocks=1 00:25:01.477 --rc geninfo_unexecuted_blocks=1 00:25:01.477 00:25:01.477 ' 00:25:01.477 15:11:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:01.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.477 --rc genhtml_branch_coverage=1 00:25:01.477 --rc genhtml_function_coverage=1 00:25:01.477 --rc genhtml_legend=1 00:25:01.477 --rc geninfo_all_blocks=1 00:25:01.477 --rc geninfo_unexecuted_blocks=1 00:25:01.477 00:25:01.477 ' 00:25:01.477 15:11:24 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:01.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.477 --rc genhtml_branch_coverage=1 00:25:01.477 --rc genhtml_function_coverage=1 00:25:01.477 --rc genhtml_legend=1 00:25:01.477 --rc geninfo_all_blocks=1 00:25:01.477 --rc geninfo_unexecuted_blocks=1 00:25:01.477 00:25:01.477 ' 00:25:01.477 15:11:24 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:01.477 15:11:24 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:01.477 15:11:24 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:01.477 15:11:24 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:01.477 15:11:24 -- nvmf/common.sh@7 -- # uname -s 00:25:01.477 15:11:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:01.477 15:11:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:01.477 15:11:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:01.477 15:11:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:01.477 15:11:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:01.477 15:11:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:01.477 15:11:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:01.477 15:11:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:01.477 15:11:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:01.477 15:11:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:01.477 15:11:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:25:01.477 15:11:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:25:01.477 15:11:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:01.477 15:11:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:01.477 15:11:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:01.477 15:11:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:01.477 15:11:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:01.477 15:11:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:01.477 15:11:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:01.477 15:11:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.477 15:11:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.477 15:11:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.477 15:11:24 -- paths/export.sh@5 -- # export PATH 00:25:01.477 15:11:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.477 15:11:24 -- nvmf/common.sh@46 -- # : 0 00:25:01.477 15:11:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:01.477 15:11:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:01.477 15:11:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:01.477 15:11:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:01.477 15:11:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:01.477 15:11:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:01.477 15:11:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:01.477 15:11:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:01.477 15:11:24 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:01.477 15:11:24 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:01.477 15:11:24 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:01.477 15:11:24 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:01.477 15:11:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:01.477 15:11:24 -- common/autotest_common.sh@10 -- # set +x 00:25:01.477 15:11:24 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:01.477 15:11:24 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=101369 00:25:01.477 15:11:24 -- spdkcli/common.sh@34 -- # waitforlisten 101369 00:25:01.477 15:11:24 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:01.477 15:11:24 -- common/autotest_common.sh@829 -- # '[' -z 101369 ']' 00:25:01.477 15:11:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.477 15:11:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:01.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.477 15:11:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.477 15:11:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:01.477 15:11:24 -- common/autotest_common.sh@10 -- # set +x 00:25:01.477 [2024-11-18 15:11:25.029212] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
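For orientation: run_nvmf_tgt launches the target on a two-core mask (-m 0x3) and waitforlisten then blocks until the new process answers on its JSON-RPC socket, so no spdkcli command is sent before the target is ready. A rough, hand-written equivalent of that wait (the socket path and poll interval are assumptions; the real helpers live in test/spdkcli/common.sh and test/common/autotest_common.sh):

    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 &
    nvmf_tgt_pid=$!
    # Poll the JSON-RPC socket until the target accepts requests
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done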
00:25:01.477 [2024-11-18 15:11:25.029337] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101369 ] 00:25:01.737 [2024-11-18 15:11:25.168732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:01.737 [2024-11-18 15:11:25.227775] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:01.737 [2024-11-18 15:11:25.228681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.737 [2024-11-18 15:11:25.228723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.673 15:11:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:02.673 15:11:26 -- common/autotest_common.sh@862 -- # return 0 00:25:02.673 15:11:26 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:02.673 15:11:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:02.673 15:11:26 -- common/autotest_common.sh@10 -- # set +x 00:25:02.673 15:11:26 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:02.673 15:11:26 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:02.673 15:11:26 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:02.674 15:11:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:02.674 15:11:26 -- common/autotest_common.sh@10 -- # set +x 00:25:02.674 15:11:26 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:02.674 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:02.674 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:02.674 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:02.674 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:02.674 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:02.674 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:02.674 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:02.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:02.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:02.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:02.674 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:02.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:02.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:02.674 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:02.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:02.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:02.674 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:02.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:02.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:02.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:02.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:02.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:02.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:02.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:02.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:02.674 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:02.674 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:02.674 ' 00:25:02.933 [2024-11-18 15:11:26.493040] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:05.464 [2024-11-18 15:11:28.723169] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.841 [2024-11-18 15:11:30.012844] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:09.374 [2024-11-18 15:11:32.407812] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:11.279 [2024-11-18 15:11:34.474367] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:12.656 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:12.656 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:12.656 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:12.656 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:12.656 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:12.656 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:12.656 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:12.656 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:12.656 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:12.656 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:12.656 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:12.656 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:12.656 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:12.656 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:12.656 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:12.656 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:12.656 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:12.656 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:12.656 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:12.656 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:12.656 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:12.656 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:12.656 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:12.656 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:12.656 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:12.656 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:12.656 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:12.656 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:12.656 15:11:36 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:12.656 15:11:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:12.656 15:11:36 -- common/autotest_common.sh@10 -- # set +x 00:25:12.656 15:11:36 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:12.656 15:11:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:12.656 15:11:36 -- common/autotest_common.sh@10 -- # set +x 00:25:12.656 15:11:36 -- spdkcli/nvmf.sh@69 -- # check_match 00:25:12.656 15:11:36 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:25:13.224 15:11:36 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:13.224 15:11:36 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:13.224 15:11:36 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:13.224 15:11:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:13.224 15:11:36 -- common/autotest_common.sh@10 -- # set +x 00:25:13.224 15:11:36 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:13.224 15:11:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:13.224 15:11:36 -- 
common/autotest_common.sh@10 -- # set +x 00:25:13.224 15:11:36 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:13.224 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:13.224 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:13.224 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:13.224 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:13.224 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:13.224 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:13.224 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:13.224 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:13.224 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:13.224 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:13.224 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:13.224 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:13.224 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:13.224 ' 00:25:19.792 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:19.792 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:19.792 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:19.792 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:19.792 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:19.792 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:19.792 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:19.792 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:19.792 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:19.792 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:19.792 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:19.792 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:19.792 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:19.792 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:19.792 15:11:42 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:19.792 15:11:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:19.792 15:11:42 -- common/autotest_common.sh@10 -- # set +x 00:25:19.792 15:11:42 -- spdkcli/nvmf.sh@90 -- # killprocess 101369 00:25:19.792 15:11:42 -- common/autotest_common.sh@936 -- # '[' -z 101369 ']' 00:25:19.792 15:11:42 -- common/autotest_common.sh@940 -- # kill -0 101369 00:25:19.792 15:11:42 -- common/autotest_common.sh@941 -- # uname 00:25:19.792 15:11:42 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:19.792 15:11:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101369 00:25:19.792 killing process with pid 101369 00:25:19.792 15:11:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:19.792 15:11:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:19.792 15:11:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101369' 00:25:19.792 15:11:42 -- common/autotest_common.sh@955 -- # kill 101369 00:25:19.792 [2024-11-18 15:11:42.376933] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:19.792 15:11:42 -- common/autotest_common.sh@960 -- # wait 101369 00:25:19.792 15:11:42 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:19.792 15:11:42 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:19.792 15:11:42 -- spdkcli/common.sh@13 -- # '[' -n 101369 ']' 00:25:19.792 15:11:42 -- spdkcli/common.sh@14 -- # killprocess 101369 00:25:19.792 15:11:42 -- common/autotest_common.sh@936 -- # '[' -z 101369 ']' 00:25:19.792 Process with pid 101369 is not found 00:25:19.792 15:11:42 -- common/autotest_common.sh@940 -- # kill -0 101369 00:25:19.792 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (101369) - No such process 00:25:19.792 15:11:42 -- common/autotest_common.sh@963 -- # echo 'Process with pid 101369 is not found' 00:25:19.792 15:11:42 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:19.792 15:11:42 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:19.792 15:11:42 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:19.792 00:25:19.792 real 0m17.876s 00:25:19.792 user 0m38.695s 00:25:19.792 sys 0m0.961s 00:25:19.792 15:11:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:19.792 15:11:42 -- common/autotest_common.sh@10 -- # set +x 00:25:19.792 ************************************ 00:25:19.792 END TEST spdkcli_nvmf_tcp 00:25:19.792 ************************************ 00:25:19.792 15:11:42 -- spdk/autotest.sh@285 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:19.792 15:11:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:19.792 15:11:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:19.792 15:11:42 -- common/autotest_common.sh@10 -- # set +x 00:25:19.792 ************************************ 00:25:19.792 START TEST nvmf_identify_passthru 00:25:19.792 ************************************ 00:25:19.792 15:11:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:19.792 * Looking for test storage... 
00:25:19.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:19.792 15:11:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:19.792 15:11:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:19.792 15:11:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:19.792 15:11:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:19.792 15:11:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:19.792 15:11:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:19.792 15:11:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:19.792 15:11:42 -- scripts/common.sh@335 -- # IFS=.-: 00:25:19.792 15:11:42 -- scripts/common.sh@335 -- # read -ra ver1 00:25:19.792 15:11:42 -- scripts/common.sh@336 -- # IFS=.-: 00:25:19.792 15:11:42 -- scripts/common.sh@336 -- # read -ra ver2 00:25:19.792 15:11:42 -- scripts/common.sh@337 -- # local 'op=<' 00:25:19.792 15:11:42 -- scripts/common.sh@339 -- # ver1_l=2 00:25:19.792 15:11:42 -- scripts/common.sh@340 -- # ver2_l=1 00:25:19.792 15:11:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:19.792 15:11:42 -- scripts/common.sh@343 -- # case "$op" in 00:25:19.792 15:11:42 -- scripts/common.sh@344 -- # : 1 00:25:19.792 15:11:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:19.792 15:11:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:19.792 15:11:42 -- scripts/common.sh@364 -- # decimal 1 00:25:19.792 15:11:42 -- scripts/common.sh@352 -- # local d=1 00:25:19.792 15:11:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:19.792 15:11:42 -- scripts/common.sh@354 -- # echo 1 00:25:19.792 15:11:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:19.792 15:11:42 -- scripts/common.sh@365 -- # decimal 2 00:25:19.792 15:11:42 -- scripts/common.sh@352 -- # local d=2 00:25:19.792 15:11:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:19.792 15:11:42 -- scripts/common.sh@354 -- # echo 2 00:25:19.792 15:11:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:19.792 15:11:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:19.792 15:11:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:19.792 15:11:42 -- scripts/common.sh@367 -- # return 0 00:25:19.792 15:11:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:19.792 15:11:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:19.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.792 --rc genhtml_branch_coverage=1 00:25:19.792 --rc genhtml_function_coverage=1 00:25:19.792 --rc genhtml_legend=1 00:25:19.792 --rc geninfo_all_blocks=1 00:25:19.792 --rc geninfo_unexecuted_blocks=1 00:25:19.792 00:25:19.792 ' 00:25:19.792 15:11:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:19.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.792 --rc genhtml_branch_coverage=1 00:25:19.792 --rc genhtml_function_coverage=1 00:25:19.792 --rc genhtml_legend=1 00:25:19.792 --rc geninfo_all_blocks=1 00:25:19.792 --rc geninfo_unexecuted_blocks=1 00:25:19.792 00:25:19.792 ' 00:25:19.792 15:11:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:19.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.792 --rc genhtml_branch_coverage=1 00:25:19.792 --rc genhtml_function_coverage=1 00:25:19.792 --rc genhtml_legend=1 00:25:19.792 --rc geninfo_all_blocks=1 00:25:19.792 --rc geninfo_unexecuted_blocks=1 00:25:19.792 00:25:19.792 ' 00:25:19.792 
15:11:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:19.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.792 --rc genhtml_branch_coverage=1 00:25:19.792 --rc genhtml_function_coverage=1 00:25:19.792 --rc genhtml_legend=1 00:25:19.792 --rc geninfo_all_blocks=1 00:25:19.792 --rc geninfo_unexecuted_blocks=1 00:25:19.792 00:25:19.792 ' 00:25:19.793 15:11:42 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:19.793 15:11:42 -- nvmf/common.sh@7 -- # uname -s 00:25:19.793 15:11:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.793 15:11:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.793 15:11:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.793 15:11:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.793 15:11:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.793 15:11:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.793 15:11:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.793 15:11:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.793 15:11:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.793 15:11:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.793 15:11:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:25:19.793 15:11:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:25:19.793 15:11:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.793 15:11:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.793 15:11:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:19.793 15:11:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:19.793 15:11:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.793 15:11:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.793 15:11:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.793 15:11:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.793 15:11:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.793 15:11:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.793 15:11:42 -- paths/export.sh@5 -- # export PATH 00:25:19.793 15:11:42 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.793 15:11:42 -- nvmf/common.sh@46 -- # : 0 00:25:19.793 15:11:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:19.793 15:11:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:19.793 15:11:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:19.793 15:11:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.793 15:11:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.793 15:11:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:19.793 15:11:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:19.793 15:11:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:19.793 15:11:42 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:19.793 15:11:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.793 15:11:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.793 15:11:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.793 15:11:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.793 15:11:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.793 15:11:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.793 15:11:42 -- paths/export.sh@5 -- # export PATH 00:25:19.793 15:11:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.793 15:11:42 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:25:19.793 15:11:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:19.793 15:11:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:19.793 15:11:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:19.793 15:11:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:19.793 15:11:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:19.793 15:11:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.793 15:11:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:19.793 15:11:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.793 15:11:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:19.793 15:11:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:19.793 15:11:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:19.793 15:11:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:19.793 15:11:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:19.793 15:11:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:19.793 15:11:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:19.793 15:11:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:19.793 15:11:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:19.793 15:11:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:19.793 15:11:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:19.793 15:11:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:19.793 15:11:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:19.793 15:11:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:19.793 15:11:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:19.793 15:11:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:19.793 15:11:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:19.793 15:11:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:19.793 15:11:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:19.793 15:11:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:19.793 Cannot find device "nvmf_tgt_br" 00:25:19.793 15:11:42 -- nvmf/common.sh@154 -- # true 00:25:19.793 15:11:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:19.793 Cannot find device "nvmf_tgt_br2" 00:25:19.793 15:11:42 -- nvmf/common.sh@155 -- # true 00:25:19.793 15:11:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:19.793 15:11:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:19.793 Cannot find device "nvmf_tgt_br" 00:25:19.793 15:11:42 -- nvmf/common.sh@157 -- # true 00:25:19.793 15:11:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:19.793 Cannot find device "nvmf_tgt_br2" 00:25:19.793 15:11:42 -- nvmf/common.sh@158 -- # true 00:25:19.793 15:11:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:19.793 15:11:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:19.793 15:11:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:19.793 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:19.793 15:11:42 -- nvmf/common.sh@161 -- # true 00:25:19.793 15:11:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:19.793 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:25:19.793 15:11:42 -- nvmf/common.sh@162 -- # true 00:25:19.793 15:11:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:19.793 15:11:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:19.793 15:11:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:19.793 15:11:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:19.793 15:11:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:19.793 15:11:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:19.793 15:11:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:19.793 15:11:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:19.793 15:11:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:19.793 15:11:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:19.793 15:11:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:19.793 15:11:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:19.793 15:11:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:19.793 15:11:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:19.793 15:11:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:19.793 15:11:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:19.793 15:11:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:19.793 15:11:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:19.793 15:11:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:19.793 15:11:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:19.793 15:11:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:19.793 15:11:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:19.793 15:11:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:19.793 15:11:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:19.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:19.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:25:19.793 00:25:19.794 --- 10.0.0.2 ping statistics --- 00:25:19.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.794 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:25:19.794 15:11:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:19.794 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:19.794 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:25:19.794 00:25:19.794 --- 10.0.0.3 ping statistics --- 00:25:19.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.794 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:25:19.794 15:11:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:19.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:19.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:25:19.794 00:25:19.794 --- 10.0.0.1 ping statistics --- 00:25:19.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.794 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:25:19.794 15:11:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:19.794 15:11:43 -- nvmf/common.sh@421 -- # return 0 00:25:19.794 15:11:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:19.794 15:11:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:19.794 15:11:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:19.794 15:11:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:19.794 15:11:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:19.794 15:11:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:19.794 15:11:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:19.794 15:11:43 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:19.794 15:11:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:19.794 15:11:43 -- common/autotest_common.sh@10 -- # set +x 00:25:19.794 15:11:43 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:19.794 15:11:43 -- common/autotest_common.sh@1519 -- # bdfs=() 00:25:19.794 15:11:43 -- common/autotest_common.sh@1519 -- # local bdfs 00:25:19.794 15:11:43 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:25:19.794 15:11:43 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:25:19.794 15:11:43 -- common/autotest_common.sh@1508 -- # bdfs=() 00:25:19.794 15:11:43 -- common/autotest_common.sh@1508 -- # local bdfs 00:25:19.794 15:11:43 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:19.794 15:11:43 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:19.794 15:11:43 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:25:19.794 15:11:43 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:25:19.794 15:11:43 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:19.794 15:11:43 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:25:19.794 15:11:43 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:25:19.794 15:11:43 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:25:19.794 15:11:43 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:19.794 15:11:43 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:19.794 15:11:43 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:20.053 15:11:43 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:25:20.053 15:11:43 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:20.053 15:11:43 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:20.053 15:11:43 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:20.053 15:11:43 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:20.053 15:11:43 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:20.053 15:11:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:20.053 15:11:43 -- common/autotest_common.sh@10 -- # set +x 00:25:20.312 15:11:43 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:25:20.313 15:11:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:20.313 15:11:43 -- common/autotest_common.sh@10 -- # set +x 00:25:20.313 15:11:43 -- target/identify_passthru.sh@31 -- # nvmfpid=101875 00:25:20.313 15:11:43 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:20.313 15:11:43 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:20.313 15:11:43 -- target/identify_passthru.sh@35 -- # waitforlisten 101875 00:25:20.313 15:11:43 -- common/autotest_common.sh@829 -- # '[' -z 101875 ']' 00:25:20.313 15:11:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.313 15:11:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:20.313 15:11:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.313 15:11:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:20.313 15:11:43 -- common/autotest_common.sh@10 -- # set +x 00:25:20.313 [2024-11-18 15:11:43.746130] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:20.313 [2024-11-18 15:11:43.746249] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.313 [2024-11-18 15:11:43.887915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:20.572 [2024-11-18 15:11:43.957249] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:20.572 [2024-11-18 15:11:43.957582] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.572 [2024-11-18 15:11:43.957751] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.572 [2024-11-18 15:11:43.957864] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
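Two of the flags on that nvmf_tgt command line explain the records around it: -e 0xFFFF enables every tracepoint group (hence the hint to capture a snapshot with 'spdk_trace -s nvmf -i 0' or to copy /dev/shm/nvmf_trace.0), and --wait-for-rpc defers subsystem initialization until an explicit RPC, which is what lets the test set the passthru identify handler before the framework comes up. A sketch of that configure-before-init sequence (RPC names are taken from the records that follow; calling scripts/rpc.py directly on the default RPC socket is an assumption, since the test goes through its rpc_cmd wrapper inside the network namespace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_set_config --passthru-identify-ctrlr   # only accepted while init is still deferred
    $rpc framework_start_init                        # subsystems initialize now
    $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport is created afterwards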
00:25:20.572 [2024-11-18 15:11:43.958242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.572 [2024-11-18 15:11:43.958388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:20.572 [2024-11-18 15:11:43.958287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.572 [2024-11-18 15:11:43.958396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.572 15:11:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:20.572 15:11:44 -- common/autotest_common.sh@862 -- # return 0 00:25:20.572 15:11:44 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:20.572 15:11:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.572 15:11:44 -- common/autotest_common.sh@10 -- # set +x 00:25:20.572 15:11:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.572 15:11:44 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:20.572 15:11:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.572 15:11:44 -- common/autotest_common.sh@10 -- # set +x 00:25:20.572 [2024-11-18 15:11:44.106237] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:20.572 15:11:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.572 15:11:44 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:20.572 15:11:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.572 15:11:44 -- common/autotest_common.sh@10 -- # set +x 00:25:20.572 [2024-11-18 15:11:44.120406] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.572 15:11:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.572 15:11:44 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:20.572 15:11:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:20.572 15:11:44 -- common/autotest_common.sh@10 -- # set +x 00:25:20.832 15:11:44 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:25:20.832 15:11:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.832 15:11:44 -- common/autotest_common.sh@10 -- # set +x 00:25:20.832 Nvme0n1 00:25:20.832 15:11:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.832 15:11:44 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:20.832 15:11:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.832 15:11:44 -- common/autotest_common.sh@10 -- # set +x 00:25:20.832 15:11:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.832 15:11:44 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:20.832 15:11:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.832 15:11:44 -- common/autotest_common.sh@10 -- # set +x 00:25:20.832 15:11:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.832 15:11:44 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:20.832 15:11:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.832 15:11:44 -- common/autotest_common.sh@10 -- # set +x 00:25:20.832 [2024-11-18 15:11:44.252028] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:20.832 15:11:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:20.832 15:11:44 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:20.832 15:11:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.832 15:11:44 -- common/autotest_common.sh@10 -- # set +x 00:25:20.832 [2024-11-18 15:11:44.259819] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:20.832 [ 00:25:20.832 { 00:25:20.832 "allow_any_host": true, 00:25:20.832 "hosts": [], 00:25:20.832 "listen_addresses": [], 00:25:20.832 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:20.832 "subtype": "Discovery" 00:25:20.832 }, 00:25:20.832 { 00:25:20.832 "allow_any_host": true, 00:25:20.832 "hosts": [], 00:25:20.832 "listen_addresses": [ 00:25:20.832 { 00:25:20.832 "adrfam": "IPv4", 00:25:20.832 "traddr": "10.0.0.2", 00:25:20.832 "transport": "TCP", 00:25:20.832 "trsvcid": "4420", 00:25:20.832 "trtype": "TCP" 00:25:20.832 } 00:25:20.832 ], 00:25:20.832 "max_cntlid": 65519, 00:25:20.832 "max_namespaces": 1, 00:25:20.832 "min_cntlid": 1, 00:25:20.832 "model_number": "SPDK bdev Controller", 00:25:20.832 "namespaces": [ 00:25:20.832 { 00:25:20.832 "bdev_name": "Nvme0n1", 00:25:20.832 "name": "Nvme0n1", 00:25:20.832 "nguid": "54FD056ADFE14E7F89299FA53A5184C7", 00:25:20.832 "nsid": 1, 00:25:20.832 "uuid": "54fd056a-dfe1-4e7f-8929-9fa53a5184c7" 00:25:20.832 } 00:25:20.832 ], 00:25:20.832 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.832 "serial_number": "SPDK00000000000001", 00:25:20.832 "subtype": "NVMe" 00:25:20.832 } 00:25:20.832 ] 00:25:20.832 15:11:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.832 15:11:44 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:20.832 15:11:44 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:20.832 15:11:44 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:21.091 15:11:44 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:21.091 15:11:44 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:21.091 15:11:44 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:21.091 15:11:44 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:21.350 15:11:44 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:21.350 15:11:44 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:21.350 15:11:44 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:21.350 15:11:44 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:21.350 15:11:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.350 15:11:44 -- common/autotest_common.sh@10 -- # set +x 00:25:21.350 15:11:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.350 15:11:44 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:21.350 15:11:44 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:21.350 15:11:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:21.350 15:11:44 -- nvmf/common.sh@116 -- # sync 00:25:21.350 15:11:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:21.350 15:11:44 -- nvmf/common.sh@119 -- # set +e 00:25:21.350 15:11:44 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:25:21.350 15:11:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:21.350 rmmod nvme_tcp 00:25:21.350 rmmod nvme_fabrics 00:25:21.350 rmmod nvme_keyring 00:25:21.350 15:11:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:21.350 15:11:44 -- nvmf/common.sh@123 -- # set -e 00:25:21.350 15:11:44 -- nvmf/common.sh@124 -- # return 0 00:25:21.350 15:11:44 -- nvmf/common.sh@477 -- # '[' -n 101875 ']' 00:25:21.350 15:11:44 -- nvmf/common.sh@478 -- # killprocess 101875 00:25:21.350 15:11:44 -- common/autotest_common.sh@936 -- # '[' -z 101875 ']' 00:25:21.350 15:11:44 -- common/autotest_common.sh@940 -- # kill -0 101875 00:25:21.350 15:11:44 -- common/autotest_common.sh@941 -- # uname 00:25:21.350 15:11:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:21.350 15:11:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101875 00:25:21.350 killing process with pid 101875 00:25:21.350 15:11:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:21.350 15:11:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:21.350 15:11:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101875' 00:25:21.350 15:11:44 -- common/autotest_common.sh@955 -- # kill 101875 00:25:21.350 [2024-11-18 15:11:44.868999] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:21.350 15:11:44 -- common/autotest_common.sh@960 -- # wait 101875 00:25:21.609 15:11:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:21.609 15:11:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:21.609 15:11:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:21.609 15:11:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:21.609 15:11:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:21.609 15:11:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.610 15:11:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:21.610 15:11:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.610 15:11:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:21.610 ************************************ 00:25:21.610 END TEST nvmf_identify_passthru 00:25:21.610 ************************************ 00:25:21.610 00:25:21.610 real 0m2.452s 00:25:21.610 user 0m4.769s 00:25:21.610 sys 0m0.812s 00:25:21.610 15:11:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:21.610 15:11:45 -- common/autotest_common.sh@10 -- # set +x 00:25:21.610 15:11:45 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:21.610 15:11:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:21.610 15:11:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:21.610 15:11:45 -- common/autotest_common.sh@10 -- # set +x 00:25:21.610 ************************************ 00:25:21.610 START TEST nvmf_dif 00:25:21.610 ************************************ 00:25:21.610 15:11:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:21.869 * Looking for test storage... 
00:25:21.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:21.869 15:11:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:21.869 15:11:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:21.869 15:11:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:21.869 15:11:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:21.869 15:11:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:21.869 15:11:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:21.869 15:11:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:21.869 15:11:45 -- scripts/common.sh@335 -- # IFS=.-: 00:25:21.869 15:11:45 -- scripts/common.sh@335 -- # read -ra ver1 00:25:21.869 15:11:45 -- scripts/common.sh@336 -- # IFS=.-: 00:25:21.869 15:11:45 -- scripts/common.sh@336 -- # read -ra ver2 00:25:21.869 15:11:45 -- scripts/common.sh@337 -- # local 'op=<' 00:25:21.869 15:11:45 -- scripts/common.sh@339 -- # ver1_l=2 00:25:21.869 15:11:45 -- scripts/common.sh@340 -- # ver2_l=1 00:25:21.869 15:11:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:21.869 15:11:45 -- scripts/common.sh@343 -- # case "$op" in 00:25:21.869 15:11:45 -- scripts/common.sh@344 -- # : 1 00:25:21.869 15:11:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:21.869 15:11:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:21.869 15:11:45 -- scripts/common.sh@364 -- # decimal 1 00:25:21.869 15:11:45 -- scripts/common.sh@352 -- # local d=1 00:25:21.869 15:11:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:21.869 15:11:45 -- scripts/common.sh@354 -- # echo 1 00:25:21.869 15:11:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:21.869 15:11:45 -- scripts/common.sh@365 -- # decimal 2 00:25:21.869 15:11:45 -- scripts/common.sh@352 -- # local d=2 00:25:21.869 15:11:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:21.869 15:11:45 -- scripts/common.sh@354 -- # echo 2 00:25:21.869 15:11:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:21.869 15:11:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:21.869 15:11:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:21.869 15:11:45 -- scripts/common.sh@367 -- # return 0 00:25:21.869 15:11:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:21.869 15:11:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:21.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.869 --rc genhtml_branch_coverage=1 00:25:21.869 --rc genhtml_function_coverage=1 00:25:21.869 --rc genhtml_legend=1 00:25:21.869 --rc geninfo_all_blocks=1 00:25:21.869 --rc geninfo_unexecuted_blocks=1 00:25:21.869 00:25:21.869 ' 00:25:21.869 15:11:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:21.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.869 --rc genhtml_branch_coverage=1 00:25:21.869 --rc genhtml_function_coverage=1 00:25:21.869 --rc genhtml_legend=1 00:25:21.869 --rc geninfo_all_blocks=1 00:25:21.869 --rc geninfo_unexecuted_blocks=1 00:25:21.869 00:25:21.869 ' 00:25:21.869 15:11:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:21.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.870 --rc genhtml_branch_coverage=1 00:25:21.870 --rc genhtml_function_coverage=1 00:25:21.870 --rc genhtml_legend=1 00:25:21.870 --rc geninfo_all_blocks=1 00:25:21.870 --rc geninfo_unexecuted_blocks=1 00:25:21.870 00:25:21.870 ' 00:25:21.870 
15:11:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:21.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.870 --rc genhtml_branch_coverage=1 00:25:21.870 --rc genhtml_function_coverage=1 00:25:21.870 --rc genhtml_legend=1 00:25:21.870 --rc geninfo_all_blocks=1 00:25:21.870 --rc geninfo_unexecuted_blocks=1 00:25:21.870 00:25:21.870 ' 00:25:21.870 15:11:45 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:21.870 15:11:45 -- nvmf/common.sh@7 -- # uname -s 00:25:21.870 15:11:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.870 15:11:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.870 15:11:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.870 15:11:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.870 15:11:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.870 15:11:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.870 15:11:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.870 15:11:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.870 15:11:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.870 15:11:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.870 15:11:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:25:21.870 15:11:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:25:21.870 15:11:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.870 15:11:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.870 15:11:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:21.870 15:11:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:21.870 15:11:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.870 15:11:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.870 15:11:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.870 15:11:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.870 15:11:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.870 15:11:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.870 15:11:45 -- paths/export.sh@5 -- # export PATH 00:25:21.870 15:11:45 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.870 15:11:45 -- nvmf/common.sh@46 -- # : 0 00:25:21.870 15:11:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:21.870 15:11:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:21.870 15:11:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:21.870 15:11:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.870 15:11:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.870 15:11:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:21.870 15:11:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:21.870 15:11:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:21.870 15:11:45 -- target/dif.sh@15 -- # NULL_META=16 00:25:21.870 15:11:45 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:21.870 15:11:45 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:21.870 15:11:45 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:21.870 15:11:45 -- target/dif.sh@135 -- # nvmftestinit 00:25:21.870 15:11:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:21.870 15:11:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.870 15:11:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:21.870 15:11:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:21.870 15:11:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:21.870 15:11:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.870 15:11:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:21.870 15:11:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.870 15:11:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:21.870 15:11:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:21.870 15:11:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:21.870 15:11:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:21.870 15:11:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:21.870 15:11:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:21.870 15:11:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.870 15:11:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.870 15:11:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:21.870 15:11:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:21.870 15:11:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:21.870 15:11:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:21.870 15:11:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:21.870 15:11:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.870 15:11:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:21.870 15:11:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:21.870 15:11:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:21.870 15:11:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:21.870 15:11:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:21.870 15:11:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:21.870 Cannot find device "nvmf_tgt_br" 
00:25:21.870 15:11:45 -- nvmf/common.sh@154 -- # true 00:25:21.870 15:11:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:21.870 Cannot find device "nvmf_tgt_br2" 00:25:21.870 15:11:45 -- nvmf/common.sh@155 -- # true 00:25:21.870 15:11:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:21.870 15:11:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:21.870 Cannot find device "nvmf_tgt_br" 00:25:21.870 15:11:45 -- nvmf/common.sh@157 -- # true 00:25:21.870 15:11:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:22.130 Cannot find device "nvmf_tgt_br2" 00:25:22.130 15:11:45 -- nvmf/common.sh@158 -- # true 00:25:22.130 15:11:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:22.130 15:11:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:22.130 15:11:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:22.130 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:22.130 15:11:45 -- nvmf/common.sh@161 -- # true 00:25:22.130 15:11:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:22.130 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:22.130 15:11:45 -- nvmf/common.sh@162 -- # true 00:25:22.130 15:11:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:22.130 15:11:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:22.130 15:11:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:22.130 15:11:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:22.130 15:11:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:22.130 15:11:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:22.130 15:11:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:22.130 15:11:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:22.130 15:11:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:22.130 15:11:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:22.130 15:11:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:22.130 15:11:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:22.130 15:11:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:22.130 15:11:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:22.130 15:11:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:22.130 15:11:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:22.130 15:11:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:22.130 15:11:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:22.130 15:11:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:22.130 15:11:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:22.130 15:11:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:22.130 15:11:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:22.130 15:11:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:22.130 15:11:45 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:22.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:22.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:25:22.130 00:25:22.130 --- 10.0.0.2 ping statistics --- 00:25:22.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.130 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:25:22.130 15:11:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:22.130 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:22.130 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:25:22.130 00:25:22.130 --- 10.0.0.3 ping statistics --- 00:25:22.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.130 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:25:22.130 15:11:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:22.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:22.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:25:22.130 00:25:22.130 --- 10.0.0.1 ping statistics --- 00:25:22.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.130 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:25:22.394 15:11:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:22.394 15:11:45 -- nvmf/common.sh@421 -- # return 0 00:25:22.394 15:11:45 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:22.394 15:11:45 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:22.668 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:22.668 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:22.668 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:22.668 15:11:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:22.668 15:11:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:22.668 15:11:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:22.668 15:11:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:22.668 15:11:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:22.668 15:11:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:22.668 15:11:46 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:22.668 15:11:46 -- target/dif.sh@137 -- # nvmfappstart 00:25:22.668 15:11:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:22.668 15:11:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:22.668 15:11:46 -- common/autotest_common.sh@10 -- # set +x 00:25:22.668 15:11:46 -- nvmf/common.sh@469 -- # nvmfpid=102219 00:25:22.668 15:11:46 -- nvmf/common.sh@470 -- # waitforlisten 102219 00:25:22.668 15:11:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:22.668 15:11:46 -- common/autotest_common.sh@829 -- # '[' -z 102219 ']' 00:25:22.668 15:11:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.668 15:11:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:22.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.668 15:11:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
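The three pings above are the tail end of nvmf_veth_init: the harness builds a small veth/bridge topology so the initiator (10.0.0.1) can reach the target namespace (10.0.0.2 and 10.0.0.3) on TCP port 4420. Condensed from the commands traced in the preceding block (the initial teardown of stale links and the "Cannot find device" noise are omitted):

    # target side lives in its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # addresses: initiator 10.0.0.1, target 10.0.0.2 / 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up and tie the host-side veth ends together with a bridge
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # let NVMe/TCP traffic in, allow bridge forwarding, then sanity-check connectivity
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1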
00:25:22.668 15:11:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:22.668 15:11:46 -- common/autotest_common.sh@10 -- # set +x 00:25:22.668 [2024-11-18 15:11:46.246283] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:22.668 [2024-11-18 15:11:46.246403] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:22.942 [2024-11-18 15:11:46.383604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.942 [2024-11-18 15:11:46.440661] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:22.942 [2024-11-18 15:11:46.440790] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:22.942 [2024-11-18 15:11:46.440802] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:22.942 [2024-11-18 15:11:46.440817] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:22.942 [2024-11-18 15:11:46.440849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.885 15:11:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:23.885 15:11:47 -- common/autotest_common.sh@862 -- # return 0 00:25:23.885 15:11:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:23.885 15:11:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:23.885 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:25:23.885 15:11:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:23.885 15:11:47 -- target/dif.sh@139 -- # create_transport 00:25:23.885 15:11:47 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:23.885 15:11:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.885 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:25:23.885 [2024-11-18 15:11:47.312968] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:23.885 15:11:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.885 15:11:47 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:23.885 15:11:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:23.885 15:11:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:23.885 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:25:23.885 ************************************ 00:25:23.885 START TEST fio_dif_1_default 00:25:23.885 ************************************ 00:25:23.885 15:11:47 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:25:23.885 15:11:47 -- target/dif.sh@86 -- # create_subsystems 0 00:25:23.885 15:11:47 -- target/dif.sh@28 -- # local sub 00:25:23.885 15:11:47 -- target/dif.sh@30 -- # for sub in "$@" 00:25:23.885 15:11:47 -- target/dif.sh@31 -- # create_subsystem 0 00:25:23.885 15:11:47 -- target/dif.sh@18 -- # local sub_id=0 00:25:23.885 15:11:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:23.885 15:11:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.885 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:25:23.885 bdev_null0 00:25:23.885 15:11:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.885 15:11:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:23.885 15:11:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.885 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:25:23.885 15:11:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.885 15:11:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:23.885 15:11:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.885 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:25:23.885 15:11:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.885 15:11:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:23.885 15:11:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.885 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:25:23.885 [2024-11-18 15:11:47.357085] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:23.885 15:11:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.885 15:11:47 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:23.885 15:11:47 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:23.885 15:11:47 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:23.885 15:11:47 -- nvmf/common.sh@520 -- # config=() 00:25:23.885 15:11:47 -- nvmf/common.sh@520 -- # local subsystem config 00:25:23.885 15:11:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:23.885 15:11:47 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:23.885 15:11:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:23.885 { 00:25:23.885 "params": { 00:25:23.885 "name": "Nvme$subsystem", 00:25:23.885 "trtype": "$TEST_TRANSPORT", 00:25:23.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.885 "adrfam": "ipv4", 00:25:23.885 "trsvcid": "$NVMF_PORT", 00:25:23.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.885 "hdgst": ${hdgst:-false}, 00:25:23.885 "ddgst": ${ddgst:-false} 00:25:23.885 }, 00:25:23.885 "method": "bdev_nvme_attach_controller" 00:25:23.885 } 00:25:23.885 EOF 00:25:23.885 )") 00:25:23.885 15:11:47 -- target/dif.sh@82 -- # gen_fio_conf 00:25:23.885 15:11:47 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:23.885 15:11:47 -- target/dif.sh@54 -- # local file 00:25:23.885 15:11:47 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:23.885 15:11:47 -- target/dif.sh@56 -- # cat 00:25:23.886 15:11:47 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:23.886 15:11:47 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:23.886 15:11:47 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:23.886 15:11:47 -- common/autotest_common.sh@1330 -- # shift 00:25:23.886 15:11:47 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:23.886 15:11:47 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:23.886 15:11:47 -- nvmf/common.sh@542 -- # cat 00:25:23.886 15:11:47 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:23.886 15:11:47 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:23.886 15:11:47 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:23.886 
15:11:47 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:23.886 15:11:47 -- target/dif.sh@72 -- # (( file <= files )) 00:25:23.886 15:11:47 -- nvmf/common.sh@544 -- # jq . 00:25:23.886 15:11:47 -- nvmf/common.sh@545 -- # IFS=, 00:25:23.886 15:11:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:23.886 "params": { 00:25:23.886 "name": "Nvme0", 00:25:23.886 "trtype": "tcp", 00:25:23.886 "traddr": "10.0.0.2", 00:25:23.886 "adrfam": "ipv4", 00:25:23.886 "trsvcid": "4420", 00:25:23.886 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:23.886 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:23.886 "hdgst": false, 00:25:23.886 "ddgst": false 00:25:23.886 }, 00:25:23.886 "method": "bdev_nvme_attach_controller" 00:25:23.886 }' 00:25:23.886 15:11:47 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:23.886 15:11:47 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:23.886 15:11:47 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:23.886 15:11:47 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:23.886 15:11:47 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:23.886 15:11:47 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:23.886 15:11:47 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:23.886 15:11:47 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:23.886 15:11:47 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:23.886 15:11:47 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:24.144 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:24.144 fio-3.35 00:25:24.144 Starting 1 thread 00:25:24.403 [2024-11-18 15:11:47.985226] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
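The "filename0" job above is fio driving the NVMe-oF target through the SPDK bdev fio plugin rather than a kernel block device: create_json_sub_conf (via gen_nvmf_target_json) emits the bdev_nvme_attach_controller block printf'd above and hands it to fio as --spdk_json_conf on /dev/fd/62, while gen_fio_conf supplies the job file on /dev/fd/61. The rpc.c "socket in use" errors surrounding the fio start-up come from the plugin's embedded SPDK app finding /var/tmp/spdk.sock already owned by the running nvmf_tgt; the jobs nevertheless complete with err= 0 below. A sketch of the wiring, with hypothetical temp files standing in for the harness's /dev/fd process substitutions (create_json_sub_conf and gen_fio_conf are the dif.sh helpers seen in the trace):

    # JSON consumed by the plugin: one bdev_nvme_attach_controller per target subsystem
    create_json_sub_conf 0 > /tmp/nvme0.json     # hypothetical temp path
    gen_fio_conf > /tmp/dif.fio                  # job file: filename0, bs/iodepth/numjobs per test

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme0.json /tmp/dif.fio

Because the target side was brought up earlier with bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 and nvmf_create_transport -t tcp -o --dif-insert-or-strip, these reads exercise DIF insert/strip on the TCP transport path.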
00:25:24.403 [2024-11-18 15:11:47.985289] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:36.614 00:25:36.614 filename0: (groupid=0, jobs=1): err= 0: pid=102302: Mon Nov 18 15:11:58 2024 00:25:36.614 read: IOPS=7497, BW=29.3MiB/s (30.7MB/s)(293MiB/10001msec) 00:25:36.614 slat (nsec): min=5777, max=49905, avg=6774.12, stdev=1854.22 00:25:36.614 clat (usec): min=346, max=42369, avg=513.14, stdev=2279.14 00:25:36.614 lat (usec): min=352, max=42378, avg=519.91, stdev=2279.20 00:25:36.614 clat percentiles (usec): 00:25:36.614 | 1.00th=[ 359], 5.00th=[ 359], 10.00th=[ 363], 20.00th=[ 367], 00:25:36.614 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[ 379], 60.00th=[ 383], 00:25:36.614 | 70.00th=[ 388], 80.00th=[ 396], 90.00th=[ 412], 95.00th=[ 437], 00:25:36.614 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[41157], 99.95th=[41157], 00:25:36.614 | 99.99th=[42206] 00:25:36.614 bw ( KiB/s): min=10528, max=38208, per=100.00%, avg=30019.37, stdev=6939.59, samples=19 00:25:36.614 iops : min= 2632, max= 9552, avg=7504.84, stdev=1734.90, samples=19 00:25:36.614 lat (usec) : 500=98.32%, 750=1.36% 00:25:36.614 lat (msec) : 10=0.01%, 50=0.31% 00:25:36.614 cpu : usr=85.77%, sys=11.21%, ctx=240, majf=0, minf=0 00:25:36.614 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:36.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.614 issued rwts: total=74980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:36.614 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:36.614 00:25:36.614 Run status group 0 (all jobs): 00:25:36.614 READ: bw=29.3MiB/s (30.7MB/s), 29.3MiB/s-29.3MiB/s (30.7MB/s-30.7MB/s), io=293MiB (307MB), run=10001-10001msec 00:25:36.614 15:11:58 -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:36.614 15:11:58 -- target/dif.sh@43 -- # local sub 00:25:36.614 15:11:58 -- target/dif.sh@45 -- # for sub in "$@" 00:25:36.615 15:11:58 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:36.615 15:11:58 -- target/dif.sh@36 -- # local sub_id=0 00:25:36.615 15:11:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:36.615 15:11:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.615 15:11:58 -- common/autotest_common.sh@10 -- # set +x 00:25:36.615 15:11:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.615 15:11:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:36.615 15:11:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.615 15:11:58 -- common/autotest_common.sh@10 -- # set +x 00:25:36.615 15:11:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.615 00:25:36.615 real 0m10.997s 00:25:36.615 user 0m9.197s 00:25:36.615 sys 0m1.404s 00:25:36.615 15:11:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:36.615 15:11:58 -- common/autotest_common.sh@10 -- # set +x 00:25:36.615 ************************************ 00:25:36.615 END TEST fio_dif_1_default 00:25:36.615 ************************************ 00:25:36.615 15:11:58 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:36.615 15:11:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:36.615 15:11:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:36.615 15:11:58 -- common/autotest_common.sh@10 -- # set +x 00:25:36.615 ************************************ 00:25:36.615 START TEST 
fio_dif_1_multi_subsystems 00:25:36.615 ************************************ 00:25:36.615 15:11:58 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:25:36.615 15:11:58 -- target/dif.sh@92 -- # local files=1 00:25:36.615 15:11:58 -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:36.615 15:11:58 -- target/dif.sh@28 -- # local sub 00:25:36.615 15:11:58 -- target/dif.sh@30 -- # for sub in "$@" 00:25:36.615 15:11:58 -- target/dif.sh@31 -- # create_subsystem 0 00:25:36.615 15:11:58 -- target/dif.sh@18 -- # local sub_id=0 00:25:36.615 15:11:58 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:36.615 15:11:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.615 15:11:58 -- common/autotest_common.sh@10 -- # set +x 00:25:36.615 bdev_null0 00:25:36.615 15:11:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.615 15:11:58 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:36.615 15:11:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.615 15:11:58 -- common/autotest_common.sh@10 -- # set +x 00:25:36.615 15:11:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.615 15:11:58 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:36.615 15:11:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.615 15:11:58 -- common/autotest_common.sh@10 -- # set +x 00:25:36.615 15:11:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.615 15:11:58 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:36.615 15:11:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.615 15:11:58 -- common/autotest_common.sh@10 -- # set +x 00:25:36.615 [2024-11-18 15:11:58.407875] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:36.615 15:11:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.615 15:11:58 -- target/dif.sh@30 -- # for sub in "$@" 00:25:36.615 15:11:58 -- target/dif.sh@31 -- # create_subsystem 1 00:25:36.615 15:11:58 -- target/dif.sh@18 -- # local sub_id=1 00:25:36.615 15:11:58 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:36.615 15:11:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.615 15:11:58 -- common/autotest_common.sh@10 -- # set +x 00:25:36.615 bdev_null1 00:25:36.615 15:11:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.615 15:11:58 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:36.615 15:11:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.615 15:11:58 -- common/autotest_common.sh@10 -- # set +x 00:25:36.615 15:11:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.615 15:11:58 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:36.615 15:11:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.615 15:11:58 -- common/autotest_common.sh@10 -- # set +x 00:25:36.615 15:11:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.615 15:11:58 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:36.615 15:11:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.615 15:11:58 -- 
common/autotest_common.sh@10 -- # set +x 00:25:36.615 15:11:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.615 15:11:58 -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:36.615 15:11:58 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:36.615 15:11:58 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:36.615 15:11:58 -- nvmf/common.sh@520 -- # config=() 00:25:36.615 15:11:58 -- nvmf/common.sh@520 -- # local subsystem config 00:25:36.615 15:11:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:36.615 15:11:58 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:36.615 15:11:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:36.615 { 00:25:36.615 "params": { 00:25:36.615 "name": "Nvme$subsystem", 00:25:36.615 "trtype": "$TEST_TRANSPORT", 00:25:36.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.615 "adrfam": "ipv4", 00:25:36.615 "trsvcid": "$NVMF_PORT", 00:25:36.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.615 "hdgst": ${hdgst:-false}, 00:25:36.615 "ddgst": ${ddgst:-false} 00:25:36.615 }, 00:25:36.615 "method": "bdev_nvme_attach_controller" 00:25:36.615 } 00:25:36.615 EOF 00:25:36.615 )") 00:25:36.615 15:11:58 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:36.615 15:11:58 -- target/dif.sh@82 -- # gen_fio_conf 00:25:36.615 15:11:58 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:36.615 15:11:58 -- target/dif.sh@54 -- # local file 00:25:36.615 15:11:58 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:36.615 15:11:58 -- target/dif.sh@56 -- # cat 00:25:36.615 15:11:58 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:36.615 15:11:58 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:36.615 15:11:58 -- nvmf/common.sh@542 -- # cat 00:25:36.615 15:11:58 -- common/autotest_common.sh@1330 -- # shift 00:25:36.615 15:11:58 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:36.615 15:11:58 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:36.615 15:11:58 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:36.615 15:11:58 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:36.615 15:11:58 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:36.615 15:11:58 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:36.615 15:11:58 -- target/dif.sh@72 -- # (( file <= files )) 00:25:36.615 15:11:58 -- target/dif.sh@73 -- # cat 00:25:36.615 15:11:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:36.615 15:11:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:36.615 { 00:25:36.615 "params": { 00:25:36.615 "name": "Nvme$subsystem", 00:25:36.615 "trtype": "$TEST_TRANSPORT", 00:25:36.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.615 "adrfam": "ipv4", 00:25:36.615 "trsvcid": "$NVMF_PORT", 00:25:36.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.615 "hdgst": ${hdgst:-false}, 00:25:36.615 "ddgst": ${ddgst:-false} 00:25:36.615 }, 00:25:36.615 "method": "bdev_nvme_attach_controller" 00:25:36.615 } 00:25:36.615 EOF 00:25:36.615 )") 00:25:36.615 15:11:58 -- nvmf/common.sh@542 -- # cat 00:25:36.615 15:11:58 -- target/dif.sh@72 
-- # (( file++ )) 00:25:36.615 15:11:58 -- target/dif.sh@72 -- # (( file <= files )) 00:25:36.615 15:11:58 -- nvmf/common.sh@544 -- # jq . 00:25:36.615 15:11:58 -- nvmf/common.sh@545 -- # IFS=, 00:25:36.615 15:11:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:36.615 "params": { 00:25:36.615 "name": "Nvme0", 00:25:36.615 "trtype": "tcp", 00:25:36.615 "traddr": "10.0.0.2", 00:25:36.615 "adrfam": "ipv4", 00:25:36.615 "trsvcid": "4420", 00:25:36.615 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:36.615 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:36.615 "hdgst": false, 00:25:36.615 "ddgst": false 00:25:36.615 }, 00:25:36.615 "method": "bdev_nvme_attach_controller" 00:25:36.615 },{ 00:25:36.615 "params": { 00:25:36.615 "name": "Nvme1", 00:25:36.615 "trtype": "tcp", 00:25:36.615 "traddr": "10.0.0.2", 00:25:36.615 "adrfam": "ipv4", 00:25:36.615 "trsvcid": "4420", 00:25:36.615 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:36.615 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:36.615 "hdgst": false, 00:25:36.615 "ddgst": false 00:25:36.615 }, 00:25:36.615 "method": "bdev_nvme_attach_controller" 00:25:36.615 }' 00:25:36.615 15:11:58 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:36.615 15:11:58 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:36.616 15:11:58 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:36.616 15:11:58 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:36.616 15:11:58 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:36.616 15:11:58 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:36.616 15:11:58 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:36.616 15:11:58 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:36.616 15:11:58 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:36.616 15:11:58 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:36.616 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:36.616 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:36.616 fio-3.35 00:25:36.616 Starting 2 threads 00:25:36.616 [2024-11-18 15:11:59.181900] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:25:36.616 [2024-11-18 15:11:59.181961] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:46.592 00:25:46.592 filename0: (groupid=0, jobs=1): err= 0: pid=102463: Mon Nov 18 15:12:09 2024 00:25:46.592 read: IOPS=219, BW=877KiB/s (898kB/s)(8784KiB/10012msec) 00:25:46.592 slat (nsec): min=5799, max=79369, avg=9396.24, stdev=5812.71 00:25:46.592 clat (usec): min=354, max=42665, avg=18207.60, stdev=20094.46 00:25:46.592 lat (usec): min=360, max=42688, avg=18216.99, stdev=20094.44 00:25:46.592 clat percentiles (usec): 00:25:46.592 | 1.00th=[ 363], 5.00th=[ 375], 10.00th=[ 383], 20.00th=[ 396], 00:25:46.592 | 30.00th=[ 412], 40.00th=[ 433], 50.00th=[ 498], 60.00th=[40633], 00:25:46.592 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:46.592 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:25:46.592 | 99.99th=[42730] 00:25:46.592 bw ( KiB/s): min= 416, max= 1280, per=51.21%, avg=876.70, stdev=217.37, samples=20 00:25:46.592 iops : min= 104, max= 320, avg=219.15, stdev=54.34, samples=20 00:25:46.592 lat (usec) : 500=50.05%, 750=4.55%, 1000=1.32% 00:25:46.592 lat (msec) : 2=0.18%, 50=43.90% 00:25:46.592 cpu : usr=97.79%, sys=1.81%, ctx=10, majf=0, minf=0 00:25:46.592 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:46.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.592 issued rwts: total=2196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.592 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:46.592 filename1: (groupid=0, jobs=1): err= 0: pid=102464: Mon Nov 18 15:12:09 2024 00:25:46.592 read: IOPS=208, BW=835KiB/s (855kB/s)(8384KiB/10036msec) 00:25:46.592 slat (nsec): min=5907, max=59620, avg=9512.53, stdev=5752.88 00:25:46.592 clat (usec): min=347, max=42387, avg=19123.22, stdev=20182.84 00:25:46.592 lat (usec): min=353, max=42395, avg=19132.73, stdev=20182.90 00:25:46.592 clat percentiles (usec): 00:25:46.592 | 1.00th=[ 359], 5.00th=[ 371], 10.00th=[ 379], 20.00th=[ 392], 00:25:46.592 | 30.00th=[ 408], 40.00th=[ 429], 50.00th=[ 627], 60.00th=[40633], 00:25:46.592 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:46.592 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:25:46.592 | 99.99th=[42206] 00:25:46.592 bw ( KiB/s): min= 576, max= 1216, per=48.87%, avg=836.70, stdev=171.93, samples=20 00:25:46.592 iops : min= 144, max= 304, avg=209.15, stdev=42.98, samples=20 00:25:46.592 lat (usec) : 500=48.38%, 750=4.10%, 1000=1.15% 00:25:46.592 lat (msec) : 2=0.19%, 50=46.18% 00:25:46.592 cpu : usr=97.49%, sys=2.12%, ctx=11, majf=0, minf=9 00:25:46.592 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:46.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.592 issued rwts: total=2096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.592 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:46.592 00:25:46.592 Run status group 0 (all jobs): 00:25:46.592 READ: bw=1711KiB/s (1752kB/s), 835KiB/s-877KiB/s (855kB/s-898kB/s), io=16.8MiB (17.6MB), run=10012-10036msec 00:25:46.592 15:12:09 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:46.592 15:12:09 -- target/dif.sh@43 -- # local sub 00:25:46.592 15:12:09 -- target/dif.sh@45 -- # for sub in 
"$@" 00:25:46.592 15:12:09 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:46.592 15:12:09 -- target/dif.sh@36 -- # local sub_id=0 00:25:46.592 15:12:09 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:46.592 15:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.592 15:12:09 -- common/autotest_common.sh@10 -- # set +x 00:25:46.592 15:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.592 15:12:09 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:46.592 15:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.592 15:12:09 -- common/autotest_common.sh@10 -- # set +x 00:25:46.592 15:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.592 15:12:09 -- target/dif.sh@45 -- # for sub in "$@" 00:25:46.592 15:12:09 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:46.592 15:12:09 -- target/dif.sh@36 -- # local sub_id=1 00:25:46.592 15:12:09 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:46.592 15:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.592 15:12:09 -- common/autotest_common.sh@10 -- # set +x 00:25:46.592 15:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.592 15:12:09 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:46.592 15:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.592 15:12:09 -- common/autotest_common.sh@10 -- # set +x 00:25:46.592 15:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.592 00:25:46.592 real 0m11.201s 00:25:46.592 user 0m20.417s 00:25:46.592 sys 0m0.714s 00:25:46.592 15:12:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:46.592 15:12:09 -- common/autotest_common.sh@10 -- # set +x 00:25:46.592 ************************************ 00:25:46.592 END TEST fio_dif_1_multi_subsystems 00:25:46.592 ************************************ 00:25:46.592 15:12:09 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:46.592 15:12:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:46.592 15:12:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:46.592 15:12:09 -- common/autotest_common.sh@10 -- # set +x 00:25:46.592 ************************************ 00:25:46.592 START TEST fio_dif_rand_params 00:25:46.592 ************************************ 00:25:46.592 15:12:09 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:25:46.592 15:12:09 -- target/dif.sh@100 -- # local NULL_DIF 00:25:46.592 15:12:09 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:46.592 15:12:09 -- target/dif.sh@103 -- # NULL_DIF=3 00:25:46.593 15:12:09 -- target/dif.sh@103 -- # bs=128k 00:25:46.593 15:12:09 -- target/dif.sh@103 -- # numjobs=3 00:25:46.593 15:12:09 -- target/dif.sh@103 -- # iodepth=3 00:25:46.593 15:12:09 -- target/dif.sh@103 -- # runtime=5 00:25:46.593 15:12:09 -- target/dif.sh@105 -- # create_subsystems 0 00:25:46.593 15:12:09 -- target/dif.sh@28 -- # local sub 00:25:46.593 15:12:09 -- target/dif.sh@30 -- # for sub in "$@" 00:25:46.593 15:12:09 -- target/dif.sh@31 -- # create_subsystem 0 00:25:46.593 15:12:09 -- target/dif.sh@18 -- # local sub_id=0 00:25:46.593 15:12:09 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:46.593 15:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.593 15:12:09 -- common/autotest_common.sh@10 -- # set +x 00:25:46.593 bdev_null0 00:25:46.593 15:12:09 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.593 15:12:09 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:46.593 15:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.593 15:12:09 -- common/autotest_common.sh@10 -- # set +x 00:25:46.593 15:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.593 15:12:09 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:46.593 15:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.593 15:12:09 -- common/autotest_common.sh@10 -- # set +x 00:25:46.593 15:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.593 15:12:09 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:46.593 15:12:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.593 15:12:09 -- common/autotest_common.sh@10 -- # set +x 00:25:46.593 [2024-11-18 15:12:09.667802] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.593 15:12:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.593 15:12:09 -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:46.593 15:12:09 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:46.593 15:12:09 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:46.593 15:12:09 -- nvmf/common.sh@520 -- # config=() 00:25:46.593 15:12:09 -- nvmf/common.sh@520 -- # local subsystem config 00:25:46.593 15:12:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:46.593 15:12:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:46.593 { 00:25:46.593 "params": { 00:25:46.593 "name": "Nvme$subsystem", 00:25:46.593 "trtype": "$TEST_TRANSPORT", 00:25:46.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.593 "adrfam": "ipv4", 00:25:46.593 "trsvcid": "$NVMF_PORT", 00:25:46.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.593 "hdgst": ${hdgst:-false}, 00:25:46.593 "ddgst": ${ddgst:-false} 00:25:46.593 }, 00:25:46.593 "method": "bdev_nvme_attach_controller" 00:25:46.593 } 00:25:46.593 EOF 00:25:46.593 )") 00:25:46.593 15:12:09 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:46.593 15:12:09 -- target/dif.sh@82 -- # gen_fio_conf 00:25:46.593 15:12:09 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:46.593 15:12:09 -- target/dif.sh@54 -- # local file 00:25:46.593 15:12:09 -- target/dif.sh@56 -- # cat 00:25:46.593 15:12:09 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:46.593 15:12:09 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:46.593 15:12:09 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:46.593 15:12:09 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:46.593 15:12:09 -- common/autotest_common.sh@1330 -- # shift 00:25:46.593 15:12:09 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:46.593 15:12:09 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:46.593 15:12:09 -- nvmf/common.sh@542 -- # cat 00:25:46.593 15:12:09 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:46.593 15:12:09 
-- target/dif.sh@72 -- # (( file = 1 )) 00:25:46.593 15:12:09 -- target/dif.sh@72 -- # (( file <= files )) 00:25:46.593 15:12:09 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:46.593 15:12:09 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:46.593 15:12:09 -- nvmf/common.sh@544 -- # jq . 00:25:46.593 15:12:09 -- nvmf/common.sh@545 -- # IFS=, 00:25:46.593 15:12:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:46.593 "params": { 00:25:46.593 "name": "Nvme0", 00:25:46.593 "trtype": "tcp", 00:25:46.593 "traddr": "10.0.0.2", 00:25:46.593 "adrfam": "ipv4", 00:25:46.593 "trsvcid": "4420", 00:25:46.593 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:46.593 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:46.593 "hdgst": false, 00:25:46.593 "ddgst": false 00:25:46.593 }, 00:25:46.593 "method": "bdev_nvme_attach_controller" 00:25:46.593 }' 00:25:46.593 15:12:09 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:46.593 15:12:09 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:46.593 15:12:09 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:46.593 15:12:09 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:46.593 15:12:09 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:46.593 15:12:09 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:46.593 15:12:09 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:46.593 15:12:09 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:46.593 15:12:09 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:46.593 15:12:09 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:46.593 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:46.593 ... 00:25:46.593 fio-3.35 00:25:46.593 Starting 3 threads 00:25:46.852 [2024-11-18 15:12:10.308348] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
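The trace above shows how each fio pass is wired up: the SPDK fio bdev plugin is LD_PRELOAD-ed into /usr/src/fio/fio (together with a sanitizer runtime whenever ldd finds libasan or libclang_rt.asan linked into the plugin), the bdev layer is configured from the JSON handed over on /dev/fd/62, and the job description arrives on /dev/fd/61. A minimal standalone reproduction of the same setup might look like the sketch below; the subsystems/bdev wrapper in bdev.json, the Nvme0n1 filename, and the job-file contents are assumptions pieced together from the parameters visible in the log, not copied from this run.

  # Sketch only: drive an NVMe-oF/TCP target through fio's spdk_bdev ioengine.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk        # plugin path as seen in the trace
  cat > bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false, "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  cat > rand_params.fio <<'EOF'
  [global]
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  runtime=5
  time_based=1
  [filename0]
  ; assumed bdev name: controller Nvme0, namespace 1
  filename=Nvme0n1
  EOF
  LD_PRELOAD=$SPDK_DIR/build/fio/spdk_bdev /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf=./bdev.json ./rand_params.fio

Because the plugin routes I/O through SPDK bdevs rather than the kernel, the job's filename names a bdev (Nvme0n1) instead of a block device node. That is also presumably why the RPC socket errors above do not stop the run: the fio process starts its own SPDK application instance, finds the target's /var/tmp/spdk.sock already taken, and simply continues without an RPC server of its own.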
00:25:46.852 [2024-11-18 15:12:10.308409] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:52.121 00:25:52.121 filename0: (groupid=0, jobs=1): err= 0: pid=102621: Mon Nov 18 15:12:15 2024 00:25:52.121 read: IOPS=334, BW=41.8MiB/s (43.8MB/s)(209MiB/5004msec) 00:25:52.121 slat (usec): min=5, max=378, avg= 9.90, stdev=10.58 00:25:52.121 clat (usec): min=3401, max=49356, avg=8941.71, stdev=3611.55 00:25:52.121 lat (usec): min=3408, max=49362, avg=8951.62, stdev=3611.88 00:25:52.121 clat percentiles (usec): 00:25:52.121 | 1.00th=[ 3458], 5.00th=[ 3490], 10.00th=[ 3523], 20.00th=[ 6783], 00:25:52.121 | 30.00th=[ 7439], 40.00th=[ 7767], 50.00th=[ 8455], 60.00th=[11076], 00:25:52.121 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12518], 95.00th=[13042], 00:25:52.121 | 99.00th=[13829], 99.50th=[14222], 99.90th=[47973], 99.95th=[49546], 00:25:52.121 | 99.99th=[49546] 00:25:52.121 bw ( KiB/s): min=36096, max=56832, per=38.31%, avg=42250.00, stdev=6645.25, samples=9 00:25:52.121 iops : min= 282, max= 444, avg=330.00, stdev=51.87, samples=9 00:25:52.121 lat (msec) : 4=16.97%, 10=38.77%, 20=44.09%, 50=0.18% 00:25:52.121 cpu : usr=93.80%, sys=4.52%, ctx=5, majf=0, minf=9 00:25:52.121 IO depths : 1=32.4%, 2=67.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:52.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.121 issued rwts: total=1674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.121 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:52.121 filename0: (groupid=0, jobs=1): err= 0: pid=102622: Mon Nov 18 15:12:15 2024 00:25:52.121 read: IOPS=283, BW=35.5MiB/s (37.2MB/s)(179MiB/5046msec) 00:25:52.121 slat (nsec): min=5407, max=47373, avg=12950.28, stdev=5971.99 00:25:52.121 clat (usec): min=3802, max=51517, avg=10489.93, stdev=9818.77 00:25:52.121 lat (usec): min=3815, max=51529, avg=10502.88, stdev=9818.78 00:25:52.121 clat percentiles (usec): 00:25:52.121 | 1.00th=[ 5014], 5.00th=[ 5604], 10.00th=[ 6128], 20.00th=[ 6652], 00:25:52.121 | 30.00th=[ 7504], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8717], 00:25:52.121 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[ 9896], 95.00th=[46924], 00:25:52.121 | 99.00th=[50070], 99.50th=[50070], 99.90th=[51119], 99.95th=[51643], 00:25:52.121 | 99.99th=[51643] 00:25:52.121 bw ( KiB/s): min=25344, max=49152, per=33.23%, avg=36653.60, stdev=6974.95, samples=10 00:25:52.121 iops : min= 198, max= 384, avg=286.30, stdev=54.57, samples=10 00:25:52.121 lat (msec) : 4=0.21%, 10=91.07%, 20=2.58%, 50=5.51%, 100=0.63% 00:25:52.121 cpu : usr=93.99%, sys=4.50%, ctx=6, majf=0, minf=0 00:25:52.121 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:52.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.121 issued rwts: total=1433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.121 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:52.121 filename0: (groupid=0, jobs=1): err= 0: pid=102623: Mon Nov 18 15:12:15 2024 00:25:52.121 read: IOPS=248, BW=31.0MiB/s (32.5MB/s)(155MiB/5003msec) 00:25:52.121 slat (nsec): min=6031, max=55074, avg=15449.83, stdev=6926.32 00:25:52.121 clat (usec): min=3233, max=53347, avg=12073.11, stdev=10319.83 00:25:52.121 lat (usec): min=3243, max=53367, avg=12088.56, stdev=10320.10 00:25:52.121 clat percentiles (usec): 
00:25:52.121 | 1.00th=[ 4948], 5.00th=[ 5800], 10.00th=[ 6194], 20.00th=[ 6783], 00:25:52.121 | 30.00th=[ 8717], 40.00th=[ 9634], 50.00th=[10290], 60.00th=[10683], 00:25:52.121 | 70.00th=[10945], 80.00th=[11338], 90.00th=[12387], 95.00th=[47973], 00:25:52.121 | 99.00th=[51643], 99.50th=[51643], 99.90th=[53216], 99.95th=[53216], 00:25:52.121 | 99.99th=[53216] 00:25:52.121 bw ( KiB/s): min=27392, max=38912, per=29.27%, avg=32284.44, stdev=4280.72, samples=9 00:25:52.122 iops : min= 214, max= 304, avg=252.22, stdev=33.44, samples=9 00:25:52.122 lat (msec) : 4=0.89%, 10=44.08%, 20=48.27%, 50=3.38%, 100=3.38% 00:25:52.122 cpu : usr=94.56%, sys=4.26%, ctx=7, majf=0, minf=0 00:25:52.122 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:52.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:52.122 issued rwts: total=1241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:52.122 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:52.122 00:25:52.122 Run status group 0 (all jobs): 00:25:52.122 READ: bw=108MiB/s (113MB/s), 31.0MiB/s-41.8MiB/s (32.5MB/s-43.8MB/s), io=544MiB (570MB), run=5003-5046msec 00:25:52.122 15:12:15 -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:52.122 15:12:15 -- target/dif.sh@43 -- # local sub 00:25:52.122 15:12:15 -- target/dif.sh@45 -- # for sub in "$@" 00:25:52.122 15:12:15 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:52.122 15:12:15 -- target/dif.sh@36 -- # local sub_id=0 00:25:52.122 15:12:15 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:52.122 15:12:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.122 15:12:15 -- common/autotest_common.sh@10 -- # set +x 00:25:52.122 15:12:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.122 15:12:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:52.122 15:12:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.122 15:12:15 -- common/autotest_common.sh@10 -- # set +x 00:25:52.122 15:12:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.122 15:12:15 -- target/dif.sh@109 -- # NULL_DIF=2 00:25:52.122 15:12:15 -- target/dif.sh@109 -- # bs=4k 00:25:52.122 15:12:15 -- target/dif.sh@109 -- # numjobs=8 00:25:52.122 15:12:15 -- target/dif.sh@109 -- # iodepth=16 00:25:52.122 15:12:15 -- target/dif.sh@109 -- # runtime= 00:25:52.122 15:12:15 -- target/dif.sh@109 -- # files=2 00:25:52.122 15:12:15 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:52.122 15:12:15 -- target/dif.sh@28 -- # local sub 00:25:52.122 15:12:15 -- target/dif.sh@30 -- # for sub in "$@" 00:25:52.122 15:12:15 -- target/dif.sh@31 -- # create_subsystem 0 00:25:52.122 15:12:15 -- target/dif.sh@18 -- # local sub_id=0 00:25:52.122 15:12:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:52.122 15:12:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.122 15:12:15 -- common/autotest_common.sh@10 -- # set +x 00:25:52.381 bdev_null0 00:25:52.381 15:12:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.381 15:12:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:52.381 15:12:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.381 15:12:15 -- common/autotest_common.sh@10 -- # set +x 00:25:52.381 15:12:15 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:25:52.381 15:12:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:52.381 15:12:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.381 15:12:15 -- common/autotest_common.sh@10 -- # set +x 00:25:52.381 15:12:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.381 15:12:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:52.381 15:12:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.381 15:12:15 -- common/autotest_common.sh@10 -- # set +x 00:25:52.381 [2024-11-18 15:12:15.733901] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:52.381 15:12:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.381 15:12:15 -- target/dif.sh@30 -- # for sub in "$@" 00:25:52.381 15:12:15 -- target/dif.sh@31 -- # create_subsystem 1 00:25:52.381 15:12:15 -- target/dif.sh@18 -- # local sub_id=1 00:25:52.381 15:12:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:52.381 15:12:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.381 15:12:15 -- common/autotest_common.sh@10 -- # set +x 00:25:52.381 bdev_null1 00:25:52.381 15:12:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.381 15:12:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:52.381 15:12:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.381 15:12:15 -- common/autotest_common.sh@10 -- # set +x 00:25:52.381 15:12:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.381 15:12:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:52.381 15:12:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.381 15:12:15 -- common/autotest_common.sh@10 -- # set +x 00:25:52.381 15:12:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.381 15:12:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:52.381 15:12:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.381 15:12:15 -- common/autotest_common.sh@10 -- # set +x 00:25:52.381 15:12:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.381 15:12:15 -- target/dif.sh@30 -- # for sub in "$@" 00:25:52.381 15:12:15 -- target/dif.sh@31 -- # create_subsystem 2 00:25:52.381 15:12:15 -- target/dif.sh@18 -- # local sub_id=2 00:25:52.381 15:12:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:52.381 15:12:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.381 15:12:15 -- common/autotest_common.sh@10 -- # set +x 00:25:52.381 bdev_null2 00:25:52.381 15:12:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.381 15:12:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:52.381 15:12:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.381 15:12:15 -- common/autotest_common.sh@10 -- # set +x 00:25:52.381 15:12:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.381 15:12:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:52.381 15:12:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.381 15:12:15 -- 
common/autotest_common.sh@10 -- # set +x 00:25:52.381 15:12:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.381 15:12:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:52.381 15:12:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.381 15:12:15 -- common/autotest_common.sh@10 -- # set +x 00:25:52.381 15:12:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.381 15:12:15 -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:52.381 15:12:15 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:52.381 15:12:15 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:52.381 15:12:15 -- nvmf/common.sh@520 -- # config=() 00:25:52.381 15:12:15 -- nvmf/common.sh@520 -- # local subsystem config 00:25:52.381 15:12:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:52.381 15:12:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:52.381 { 00:25:52.381 "params": { 00:25:52.381 "name": "Nvme$subsystem", 00:25:52.381 "trtype": "$TEST_TRANSPORT", 00:25:52.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:52.381 "adrfam": "ipv4", 00:25:52.381 "trsvcid": "$NVMF_PORT", 00:25:52.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:52.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:52.381 "hdgst": ${hdgst:-false}, 00:25:52.381 "ddgst": ${ddgst:-false} 00:25:52.381 }, 00:25:52.381 "method": "bdev_nvme_attach_controller" 00:25:52.381 } 00:25:52.381 EOF 00:25:52.381 )") 00:25:52.381 15:12:15 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:52.381 15:12:15 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:52.381 15:12:15 -- target/dif.sh@82 -- # gen_fio_conf 00:25:52.381 15:12:15 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:52.381 15:12:15 -- target/dif.sh@54 -- # local file 00:25:52.381 15:12:15 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:52.381 15:12:15 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:52.381 15:12:15 -- target/dif.sh@56 -- # cat 00:25:52.381 15:12:15 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:52.381 15:12:15 -- common/autotest_common.sh@1330 -- # shift 00:25:52.381 15:12:15 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:52.381 15:12:15 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:52.382 15:12:15 -- nvmf/common.sh@542 -- # cat 00:25:52.382 15:12:15 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:52.382 15:12:15 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:52.382 15:12:15 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:52.382 15:12:15 -- target/dif.sh@72 -- # (( file <= files )) 00:25:52.382 15:12:15 -- target/dif.sh@73 -- # cat 00:25:52.382 15:12:15 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:52.382 15:12:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:52.382 15:12:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:52.382 { 00:25:52.382 "params": { 00:25:52.382 "name": "Nvme$subsystem", 00:25:52.382 "trtype": "$TEST_TRANSPORT", 00:25:52.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:52.382 "adrfam": "ipv4", 00:25:52.382 "trsvcid": "$NVMF_PORT", 00:25:52.382 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:25:52.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:52.382 "hdgst": ${hdgst:-false}, 00:25:52.382 "ddgst": ${ddgst:-false} 00:25:52.382 }, 00:25:52.382 "method": "bdev_nvme_attach_controller" 00:25:52.382 } 00:25:52.382 EOF 00:25:52.382 )") 00:25:52.382 15:12:15 -- target/dif.sh@72 -- # (( file++ )) 00:25:52.382 15:12:15 -- target/dif.sh@72 -- # (( file <= files )) 00:25:52.382 15:12:15 -- target/dif.sh@73 -- # cat 00:25:52.382 15:12:15 -- nvmf/common.sh@542 -- # cat 00:25:52.382 15:12:15 -- target/dif.sh@72 -- # (( file++ )) 00:25:52.382 15:12:15 -- target/dif.sh@72 -- # (( file <= files )) 00:25:52.382 15:12:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:52.382 15:12:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:52.382 { 00:25:52.382 "params": { 00:25:52.382 "name": "Nvme$subsystem", 00:25:52.382 "trtype": "$TEST_TRANSPORT", 00:25:52.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:52.382 "adrfam": "ipv4", 00:25:52.382 "trsvcid": "$NVMF_PORT", 00:25:52.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:52.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:52.382 "hdgst": ${hdgst:-false}, 00:25:52.382 "ddgst": ${ddgst:-false} 00:25:52.382 }, 00:25:52.382 "method": "bdev_nvme_attach_controller" 00:25:52.382 } 00:25:52.382 EOF 00:25:52.382 )") 00:25:52.382 15:12:15 -- nvmf/common.sh@542 -- # cat 00:25:52.382 15:12:15 -- nvmf/common.sh@544 -- # jq . 00:25:52.382 15:12:15 -- nvmf/common.sh@545 -- # IFS=, 00:25:52.382 15:12:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:52.382 "params": { 00:25:52.382 "name": "Nvme0", 00:25:52.382 "trtype": "tcp", 00:25:52.382 "traddr": "10.0.0.2", 00:25:52.382 "adrfam": "ipv4", 00:25:52.382 "trsvcid": "4420", 00:25:52.382 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:52.382 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:52.382 "hdgst": false, 00:25:52.382 "ddgst": false 00:25:52.382 }, 00:25:52.382 "method": "bdev_nvme_attach_controller" 00:25:52.382 },{ 00:25:52.382 "params": { 00:25:52.382 "name": "Nvme1", 00:25:52.382 "trtype": "tcp", 00:25:52.382 "traddr": "10.0.0.2", 00:25:52.382 "adrfam": "ipv4", 00:25:52.382 "trsvcid": "4420", 00:25:52.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:52.382 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:52.382 "hdgst": false, 00:25:52.382 "ddgst": false 00:25:52.382 }, 00:25:52.382 "method": "bdev_nvme_attach_controller" 00:25:52.382 },{ 00:25:52.382 "params": { 00:25:52.382 "name": "Nvme2", 00:25:52.382 "trtype": "tcp", 00:25:52.382 "traddr": "10.0.0.2", 00:25:52.382 "adrfam": "ipv4", 00:25:52.382 "trsvcid": "4420", 00:25:52.382 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:52.382 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:52.382 "hdgst": false, 00:25:52.382 "ddgst": false 00:25:52.382 }, 00:25:52.382 "method": "bdev_nvme_attach_controller" 00:25:52.382 }' 00:25:52.382 15:12:15 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:52.382 15:12:15 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:52.382 15:12:15 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:52.382 15:12:15 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:52.382 15:12:15 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:52.382 15:12:15 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:52.382 15:12:15 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:52.382 15:12:15 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:52.382 
15:12:15 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:52.382 15:12:15 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:52.640 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:52.640 ... 00:25:52.640 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:52.640 ... 00:25:52.640 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:52.640 ... 00:25:52.640 fio-3.35 00:25:52.640 Starting 24 threads 00:25:53.208 [2024-11-18 15:12:16.662969] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:25:53.208 [2024-11-18 15:12:16.663014] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:05.419 00:26:05.419 filename0: (groupid=0, jobs=1): err= 0: pid=102724: Mon Nov 18 15:12:26 2024 00:26:05.419 read: IOPS=233, BW=934KiB/s (957kB/s)(9352KiB/10009msec) 00:26:05.419 slat (usec): min=4, max=8030, avg=18.78, stdev=234.53 00:26:05.419 clat (msec): min=9, max=143, avg=68.31, stdev=19.87 00:26:05.419 lat (msec): min=9, max=143, avg=68.33, stdev=19.87 00:26:05.419 clat percentiles (msec): 00:26:05.419 | 1.00th=[ 24], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 52], 00:26:05.419 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 72], 00:26:05.419 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 108], 00:26:05.419 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 144], 00:26:05.419 | 99.99th=[ 144] 00:26:05.419 bw ( KiB/s): min= 768, max= 1072, per=3.70%, avg=918.32, stdev=76.44, samples=19 00:26:05.419 iops : min= 192, max= 268, avg=229.58, stdev=19.11, samples=19 00:26:05.419 lat (msec) : 10=0.21%, 20=0.56%, 50=16.89%, 100=76.22%, 250=6.12% 00:26:05.419 cpu : usr=32.82%, sys=0.42%, ctx=855, majf=0, minf=9 00:26:05.419 IO depths : 1=2.1%, 2=4.7%, 4=14.3%, 8=67.7%, 16=11.2%, 32=0.0%, >=64=0.0% 00:26:05.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.419 complete : 0=0.0%, 4=91.2%, 8=3.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.419 issued rwts: total=2338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.419 filename0: (groupid=0, jobs=1): err= 0: pid=102725: Mon Nov 18 15:12:26 2024 00:26:05.419 read: IOPS=239, BW=959KiB/s (982kB/s)(9596KiB/10004msec) 00:26:05.419 slat (usec): min=4, max=8068, avg=23.82, stdev=286.82 00:26:05.419 clat (msec): min=5, max=142, avg=66.57, stdev=20.96 00:26:05.419 lat (msec): min=5, max=142, avg=66.60, stdev=20.97 00:26:05.419 clat percentiles (msec): 00:26:05.419 | 1.00th=[ 13], 5.00th=[ 38], 10.00th=[ 46], 20.00th=[ 50], 00:26:05.419 | 30.00th=[ 58], 40.00th=[ 60], 50.00th=[ 63], 60.00th=[ 70], 00:26:05.419 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 95], 95.00th=[ 106], 00:26:05.419 | 99.00th=[ 140], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 144], 00:26:05.419 | 99.99th=[ 144] 00:26:05.419 bw ( KiB/s): min= 768, max= 1112, per=3.79%, avg=942.79, stdev=104.19, samples=19 00:26:05.419 iops : min= 192, max= 278, avg=235.68, stdev=26.05, samples=19 00:26:05.419 lat (msec) : 10=0.92%, 20=0.17%, 50=20.76%, 100=71.90%, 250=6.25% 00:26:05.419 cpu : usr=32.64%, sys=0.53%, ctx=881, majf=0, minf=9 00:26:05.419 IO depths : 
1=1.0%, 2=2.5%, 4=11.5%, 8=72.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:05.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.419 complete : 0=0.0%, 4=90.4%, 8=4.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.420 issued rwts: total=2399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.420 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.420 filename0: (groupid=0, jobs=1): err= 0: pid=102726: Mon Nov 18 15:12:26 2024 00:26:05.420 read: IOPS=252, BW=1009KiB/s (1033kB/s)(9.88MiB/10026msec) 00:26:05.420 slat (usec): min=4, max=11018, avg=34.13, stdev=380.63 00:26:05.420 clat (msec): min=23, max=129, avg=63.19, stdev=18.18 00:26:05.420 lat (msec): min=23, max=129, avg=63.23, stdev=18.19 00:26:05.420 clat percentiles (msec): 00:26:05.420 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 48], 00:26:05.420 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 64], 00:26:05.420 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 87], 95.00th=[ 97], 00:26:05.420 | 99.00th=[ 111], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 130], 00:26:05.420 | 99.99th=[ 130] 00:26:05.420 bw ( KiB/s): min= 880, max= 1208, per=4.06%, avg=1008.26, stdev=95.51, samples=19 00:26:05.420 iops : min= 220, max= 302, avg=252.05, stdev=23.86, samples=19 00:26:05.420 lat (msec) : 50=23.69%, 100=72.87%, 250=3.44% 00:26:05.420 cpu : usr=39.38%, sys=0.61%, ctx=1156, majf=0, minf=9 00:26:05.420 IO depths : 1=1.6%, 2=4.0%, 4=11.9%, 8=70.3%, 16=12.2%, 32=0.0%, >=64=0.0% 00:26:05.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.420 complete : 0=0.0%, 4=90.9%, 8=4.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.420 issued rwts: total=2529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.420 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.420 filename0: (groupid=0, jobs=1): err= 0: pid=102727: Mon Nov 18 15:12:26 2024 00:26:05.420 read: IOPS=246, BW=987KiB/s (1011kB/s)(9896KiB/10024msec) 00:26:05.420 slat (usec): min=5, max=4040, avg=14.08, stdev=81.38 00:26:05.420 clat (msec): min=12, max=138, avg=64.65, stdev=19.22 00:26:05.420 lat (msec): min=12, max=138, avg=64.66, stdev=19.22 00:26:05.420 clat percentiles (msec): 00:26:05.420 | 1.00th=[ 30], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 50], 00:26:05.420 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 68], 00:26:05.420 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 86], 95.00th=[ 101], 00:26:05.420 | 99.00th=[ 129], 99.50th=[ 129], 99.90th=[ 140], 99.95th=[ 140], 00:26:05.420 | 99.99th=[ 140] 00:26:05.420 bw ( KiB/s): min= 766, max= 1280, per=3.96%, avg=983.10, stdev=164.80, samples=20 00:26:05.420 iops : min= 191, max= 320, avg=245.75, stdev=41.24, samples=20 00:26:05.420 lat (msec) : 20=0.65%, 50=20.70%, 100=73.52%, 250=5.13% 00:26:05.420 cpu : usr=39.55%, sys=0.60%, ctx=1161, majf=0, minf=9 00:26:05.420 IO depths : 1=2.3%, 2=5.1%, 4=14.7%, 8=67.0%, 16=10.9%, 32=0.0%, >=64=0.0% 00:26:05.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.420 complete : 0=0.0%, 4=91.0%, 8=4.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.420 issued rwts: total=2474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.420 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.420 filename0: (groupid=0, jobs=1): err= 0: pid=102728: Mon Nov 18 15:12:26 2024 00:26:05.420 read: IOPS=247, BW=990KiB/s (1013kB/s)(9908KiB/10013msec) 00:26:05.420 slat (usec): min=3, max=6019, avg=16.60, stdev=139.69 00:26:05.420 clat (msec): min=30, max=131, avg=64.52, stdev=15.79 
00:26:05.420 lat (msec): min=30, max=131, avg=64.54, stdev=15.79 00:26:05.420 clat percentiles (msec): 00:26:05.420 | 1.00th=[ 35], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 53], 00:26:05.420 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 66], 00:26:05.420 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 85], 95.00th=[ 94], 00:26:05.420 | 99.00th=[ 110], 99.50th=[ 118], 99.90th=[ 121], 99.95th=[ 132], 00:26:05.420 | 99.99th=[ 132] 00:26:05.420 bw ( KiB/s): min= 896, max= 1208, per=3.98%, avg=989.16, stdev=79.05, samples=19 00:26:05.420 iops : min= 224, max= 302, avg=247.26, stdev=19.72, samples=19 00:26:05.420 lat (msec) : 50=18.09%, 100=79.85%, 250=2.06% 00:26:05.420 cpu : usr=42.33%, sys=0.53%, ctx=1012, majf=0, minf=9 00:26:05.420 IO depths : 1=2.3%, 2=5.2%, 4=14.6%, 8=67.5%, 16=10.5%, 32=0.0%, >=64=0.0% 00:26:05.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.420 complete : 0=0.0%, 4=91.3%, 8=3.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.420 issued rwts: total=2477,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.420 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.420 filename0: (groupid=0, jobs=1): err= 0: pid=102729: Mon Nov 18 15:12:26 2024 00:26:05.420 read: IOPS=301, BW=1204KiB/s (1233kB/s)(11.8MiB/10029msec) 00:26:05.420 slat (usec): min=4, max=8018, avg=13.66, stdev=146.18 00:26:05.420 clat (msec): min=24, max=119, avg=53.03, stdev=16.80 00:26:05.420 lat (msec): min=24, max=119, avg=53.04, stdev=16.80 00:26:05.420 clat percentiles (msec): 00:26:05.420 | 1.00th=[ 27], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 39], 00:26:05.420 | 30.00th=[ 41], 40.00th=[ 46], 50.00th=[ 50], 60.00th=[ 56], 00:26:05.420 | 70.00th=[ 61], 80.00th=[ 67], 90.00th=[ 73], 95.00th=[ 86], 00:26:05.420 | 99.00th=[ 107], 99.50th=[ 110], 99.90th=[ 120], 99.95th=[ 120], 00:26:05.420 | 99.99th=[ 120] 00:26:05.420 bw ( KiB/s): min= 736, max= 1440, per=4.85%, avg=1203.65, stdev=180.26, samples=20 00:26:05.420 iops : min= 184, max= 360, avg=300.90, stdev=45.08, samples=20 00:26:05.420 lat (msec) : 50=51.24%, 100=47.70%, 250=1.06% 00:26:05.420 cpu : usr=43.86%, sys=0.52%, ctx=1269, majf=0, minf=9 00:26:05.420 IO depths : 1=0.5%, 2=1.1%, 4=6.6%, 8=78.4%, 16=13.4%, 32=0.0%, >=64=0.0% 00:26:05.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.420 complete : 0=0.0%, 4=89.2%, 8=6.6%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.420 issued rwts: total=3019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.420 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.420 filename0: (groupid=0, jobs=1): err= 0: pid=102730: Mon Nov 18 15:12:26 2024 00:26:05.420 read: IOPS=229, BW=917KiB/s (939kB/s)(9180KiB/10016msec) 00:26:05.420 slat (usec): min=4, max=8045, avg=32.96, stdev=374.42 00:26:05.420 clat (msec): min=24, max=144, avg=69.55, stdev=18.76 00:26:05.420 lat (msec): min=24, max=144, avg=69.59, stdev=18.75 00:26:05.420 clat percentiles (msec): 00:26:05.420 | 1.00th=[ 34], 5.00th=[ 45], 10.00th=[ 49], 20.00th=[ 56], 00:26:05.420 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:26:05.420 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 106], 00:26:05.420 | 99.00th=[ 120], 99.50th=[ 131], 99.90th=[ 144], 99.95th=[ 144], 00:26:05.420 | 99.99th=[ 144] 00:26:05.420 bw ( KiB/s): min= 768, max= 1072, per=3.65%, avg=907.79, stdev=94.97, samples=19 00:26:05.420 iops : min= 192, max= 268, avg=226.95, stdev=23.74, samples=19 00:26:05.420 lat (msec) : 50=11.98%, 100=81.00%, 250=7.02% 00:26:05.420 cpu : usr=33.82%, 
sys=0.62%, ctx=937, majf=0, minf=9 00:26:05.420 IO depths : 1=2.2%, 2=5.2%, 4=15.6%, 8=66.2%, 16=10.7%, 32=0.0%, >=64=0.0% 00:26:05.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.420 complete : 0=0.0%, 4=91.5%, 8=3.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.420 issued rwts: total=2295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.420 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.420 filename0: (groupid=0, jobs=1): err= 0: pid=102731: Mon Nov 18 15:12:26 2024 00:26:05.420 read: IOPS=232, BW=932KiB/s (954kB/s)(9328KiB/10012msec) 00:26:05.420 slat (usec): min=3, max=8046, avg=26.63, stdev=304.41 00:26:05.420 clat (msec): min=31, max=142, avg=68.52, stdev=17.27 00:26:05.420 lat (msec): min=31, max=142, avg=68.55, stdev=17.28 00:26:05.420 clat percentiles (msec): 00:26:05.420 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 51], 20.00th=[ 56], 00:26:05.420 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 65], 60.00th=[ 72], 00:26:05.420 | 70.00th=[ 77], 80.00th=[ 82], 90.00th=[ 89], 95.00th=[ 99], 00:26:05.420 | 99.00th=[ 121], 99.50th=[ 138], 99.90th=[ 144], 99.95th=[ 144], 00:26:05.420 | 99.99th=[ 144] 00:26:05.420 bw ( KiB/s): min= 768, max= 1024, per=3.71%, avg=921.26, stdev=71.77, samples=19 00:26:05.420 iops : min= 192, max= 256, avg=230.32, stdev=17.94, samples=19 00:26:05.420 lat (msec) : 50=8.62%, 100=86.79%, 250=4.59% 00:26:05.420 cpu : usr=41.77%, sys=0.56%, ctx=1273, majf=0, minf=9 00:26:05.420 IO depths : 1=3.3%, 2=7.3%, 4=18.2%, 8=61.8%, 16=9.4%, 32=0.0%, >=64=0.0% 00:26:05.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.420 complete : 0=0.0%, 4=92.2%, 8=2.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.420 issued rwts: total=2332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.420 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.420 filename1: (groupid=0, jobs=1): err= 0: pid=102732: Mon Nov 18 15:12:26 2024 00:26:05.420 read: IOPS=246, BW=984KiB/s (1008kB/s)(9848KiB/10005msec) 00:26:05.420 slat (usec): min=4, max=8031, avg=15.90, stdev=161.81 00:26:05.420 clat (msec): min=10, max=131, avg=64.91, stdev=18.50 00:26:05.420 lat (msec): min=10, max=131, avg=64.93, stdev=18.50 00:26:05.420 clat percentiles (msec): 00:26:05.420 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 50], 00:26:05.420 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 61], 60.00th=[ 66], 00:26:05.420 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 93], 95.00th=[ 96], 00:26:05.420 | 99.00th=[ 112], 99.50th=[ 117], 99.90th=[ 121], 99.95th=[ 121], 00:26:05.420 | 99.99th=[ 132] 00:26:05.420 bw ( KiB/s): min= 768, max= 1200, per=3.91%, avg=971.37, stdev=105.64, samples=19 00:26:05.420 iops : min= 192, max= 300, avg=242.84, stdev=26.41, samples=19 00:26:05.420 lat (msec) : 20=0.65%, 50=19.58%, 100=75.75%, 250=4.02% 00:26:05.420 cpu : usr=32.80%, sys=0.45%, ctx=864, majf=0, minf=9 00:26:05.420 IO depths : 1=1.5%, 2=3.8%, 4=12.3%, 8=70.2%, 16=12.2%, 32=0.0%, >=64=0.0% 00:26:05.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.420 complete : 0=0.0%, 4=91.0%, 8=4.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.420 issued rwts: total=2462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.420 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.420 filename1: (groupid=0, jobs=1): err= 0: pid=102733: Mon Nov 18 15:12:26 2024 00:26:05.420 read: IOPS=237, BW=948KiB/s (971kB/s)(9500KiB/10017msec) 00:26:05.420 slat (usec): min=4, max=8036, avg=18.91, stdev=180.40 00:26:05.420 clat 
(msec): min=24, max=139, avg=67.33, stdev=18.49 00:26:05.420 lat (msec): min=24, max=139, avg=67.35, stdev=18.49 00:26:05.420 clat percentiles (msec): 00:26:05.420 | 1.00th=[ 31], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 53], 00:26:05.420 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 71], 00:26:05.420 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 93], 95.00th=[ 99], 00:26:05.421 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 140], 99.95th=[ 140], 00:26:05.421 | 99.99th=[ 140] 00:26:05.421 bw ( KiB/s): min= 768, max= 1152, per=3.75%, avg=930.16, stdev=82.12, samples=19 00:26:05.421 iops : min= 192, max= 288, avg=232.53, stdev=20.54, samples=19 00:26:05.421 lat (msec) : 50=17.05%, 100=78.57%, 250=4.38% 00:26:05.421 cpu : usr=36.36%, sys=0.54%, ctx=1188, majf=0, minf=9 00:26:05.421 IO depths : 1=2.6%, 2=6.2%, 4=17.8%, 8=63.0%, 16=10.4%, 32=0.0%, >=64=0.0% 00:26:05.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.421 complete : 0=0.0%, 4=92.1%, 8=2.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.421 issued rwts: total=2375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.421 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.421 filename1: (groupid=0, jobs=1): err= 0: pid=102734: Mon Nov 18 15:12:26 2024 00:26:05.421 read: IOPS=253, BW=1015KiB/s (1040kB/s)(9.95MiB/10032msec) 00:26:05.421 slat (usec): min=4, max=8062, avg=25.92, stdev=318.24 00:26:05.421 clat (msec): min=19, max=134, avg=62.90, stdev=17.69 00:26:05.421 lat (msec): min=19, max=134, avg=62.93, stdev=17.70 00:26:05.421 clat percentiles (msec): 00:26:05.421 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 48], 00:26:05.421 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 65], 00:26:05.421 | 70.00th=[ 71], 80.00th=[ 78], 90.00th=[ 85], 95.00th=[ 96], 00:26:05.421 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 136], 99.95th=[ 136], 00:26:05.421 | 99.99th=[ 136] 00:26:05.421 bw ( KiB/s): min= 728, max= 1248, per=4.08%, avg=1012.00, stdev=121.95, samples=20 00:26:05.421 iops : min= 182, max= 312, avg=253.00, stdev=30.49, samples=20 00:26:05.421 lat (msec) : 20=0.27%, 50=26.39%, 100=69.76%, 250=3.57% 00:26:05.421 cpu : usr=32.47%, sys=0.65%, ctx=889, majf=0, minf=9 00:26:05.421 IO depths : 1=1.1%, 2=2.8%, 4=10.8%, 8=73.0%, 16=12.3%, 32=0.0%, >=64=0.0% 00:26:05.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.421 complete : 0=0.0%, 4=90.2%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.421 issued rwts: total=2546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.421 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.421 filename1: (groupid=0, jobs=1): err= 0: pid=102735: Mon Nov 18 15:12:26 2024 00:26:05.421 read: IOPS=235, BW=940KiB/s (963kB/s)(9416KiB/10012msec) 00:26:05.421 slat (usec): min=4, max=8036, avg=20.70, stdev=247.87 00:26:05.421 clat (msec): min=27, max=143, avg=67.86, stdev=19.00 00:26:05.421 lat (msec): min=27, max=143, avg=67.89, stdev=18.99 00:26:05.421 clat percentiles (msec): 00:26:05.421 | 1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 53], 00:26:05.421 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 71], 00:26:05.421 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 101], 00:26:05.421 | 99.00th=[ 121], 99.50th=[ 131], 99.90th=[ 144], 99.95th=[ 144], 00:26:05.421 | 99.99th=[ 144] 00:26:05.421 bw ( KiB/s): min= 728, max= 1080, per=3.72%, avg=923.79, stdev=101.59, samples=19 00:26:05.421 iops : min= 182, max= 270, avg=230.95, stdev=25.40, samples=19 00:26:05.421 lat (msec) : 
50=15.59%, 100=79.14%, 250=5.27% 00:26:05.421 cpu : usr=36.95%, sys=0.52%, ctx=955, majf=0, minf=9 00:26:05.421 IO depths : 1=2.2%, 2=5.2%, 4=15.6%, 8=66.0%, 16=11.0%, 32=0.0%, >=64=0.0% 00:26:05.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.421 complete : 0=0.0%, 4=91.5%, 8=3.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.421 issued rwts: total=2354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.421 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.421 filename1: (groupid=0, jobs=1): err= 0: pid=102736: Mon Nov 18 15:12:26 2024 00:26:05.421 read: IOPS=246, BW=986KiB/s (1010kB/s)(9872KiB/10009msec) 00:26:05.421 slat (usec): min=3, max=8028, avg=24.11, stdev=242.39 00:26:05.421 clat (msec): min=11, max=143, avg=64.72, stdev=18.82 00:26:05.421 lat (msec): min=11, max=143, avg=64.74, stdev=18.82 00:26:05.421 clat percentiles (msec): 00:26:05.421 | 1.00th=[ 31], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 51], 00:26:05.421 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 66], 00:26:05.421 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 89], 95.00th=[ 100], 00:26:05.421 | 99.00th=[ 130], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:26:05.421 | 99.99th=[ 144] 00:26:05.421 bw ( KiB/s): min= 809, max= 1240, per=3.91%, avg=971.42, stdev=95.99, samples=19 00:26:05.421 iops : min= 202, max= 310, avg=242.84, stdev=24.02, samples=19 00:26:05.421 lat (msec) : 20=0.65%, 50=17.38%, 100=77.51%, 250=4.46% 00:26:05.421 cpu : usr=46.51%, sys=0.75%, ctx=1348, majf=0, minf=9 00:26:05.421 IO depths : 1=2.4%, 2=5.0%, 4=14.0%, 8=67.6%, 16=11.1%, 32=0.0%, >=64=0.0% 00:26:05.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.421 complete : 0=0.0%, 4=91.0%, 8=4.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.421 issued rwts: total=2468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.421 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.421 filename1: (groupid=0, jobs=1): err= 0: pid=102737: Mon Nov 18 15:12:26 2024 00:26:05.421 read: IOPS=267, BW=1068KiB/s (1094kB/s)(10.4MiB/10007msec) 00:26:05.421 slat (usec): min=3, max=8005, avg=20.38, stdev=233.71 00:26:05.421 clat (msec): min=24, max=140, avg=59.77, stdev=19.32 00:26:05.421 lat (msec): min=24, max=140, avg=59.79, stdev=19.32 00:26:05.421 clat percentiles (msec): 00:26:05.421 | 1.00th=[ 27], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 42], 00:26:05.421 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 58], 60.00th=[ 62], 00:26:05.421 | 70.00th=[ 69], 80.00th=[ 77], 90.00th=[ 85], 95.00th=[ 95], 00:26:05.421 | 99.00th=[ 110], 99.50th=[ 116], 99.90th=[ 140], 99.95th=[ 140], 00:26:05.421 | 99.99th=[ 140] 00:26:05.421 bw ( KiB/s): min= 768, max= 1376, per=4.27%, avg=1059.79, stdev=178.95, samples=19 00:26:05.421 iops : min= 192, max= 344, avg=264.95, stdev=44.74, samples=19 00:26:05.421 lat (msec) : 50=33.60%, 100=63.04%, 250=3.37% 00:26:05.421 cpu : usr=41.01%, sys=0.62%, ctx=1638, majf=0, minf=9 00:26:05.421 IO depths : 1=2.2%, 2=4.7%, 4=13.4%, 8=68.8%, 16=11.0%, 32=0.0%, >=64=0.0% 00:26:05.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.421 complete : 0=0.0%, 4=90.9%, 8=4.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.421 issued rwts: total=2673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.421 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.421 filename1: (groupid=0, jobs=1): err= 0: pid=102738: Mon Nov 18 15:12:26 2024 00:26:05.421 read: IOPS=295, BW=1182KiB/s (1210kB/s)(11.6MiB/10027msec) 00:26:05.421 slat 
(usec): min=6, max=4018, avg=14.87, stdev=96.52 00:26:05.421 clat (msec): min=2, max=130, avg=54.00, stdev=18.38 00:26:05.421 lat (msec): min=2, max=130, avg=54.01, stdev=18.38 00:26:05.421 clat percentiles (msec): 00:26:05.421 | 1.00th=[ 6], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 40], 00:26:05.421 | 30.00th=[ 44], 40.00th=[ 48], 50.00th=[ 55], 60.00th=[ 58], 00:26:05.421 | 70.00th=[ 61], 80.00th=[ 69], 90.00th=[ 78], 95.00th=[ 87], 00:26:05.421 | 99.00th=[ 109], 99.50th=[ 113], 99.90th=[ 131], 99.95th=[ 131], 00:26:05.421 | 99.99th=[ 131] 00:26:05.421 bw ( KiB/s): min= 896, max= 1880, per=4.76%, avg=1181.10, stdev=223.80, samples=20 00:26:05.421 iops : min= 224, max= 470, avg=295.25, stdev=55.98, samples=20 00:26:05.421 lat (msec) : 4=0.54%, 10=2.16%, 20=0.54%, 50=38.61%, 100=56.70% 00:26:05.421 lat (msec) : 250=1.45% 00:26:05.421 cpu : usr=44.26%, sys=0.70%, ctx=1545, majf=0, minf=9 00:26:05.421 IO depths : 1=1.8%, 2=4.0%, 4=12.3%, 8=70.2%, 16=11.7%, 32=0.0%, >=64=0.0% 00:26:05.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.421 complete : 0=0.0%, 4=90.8%, 8=4.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.421 issued rwts: total=2963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.421 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.421 filename1: (groupid=0, jobs=1): err= 0: pid=102739: Mon Nov 18 15:12:26 2024 00:26:05.421 read: IOPS=279, BW=1117KiB/s (1143kB/s)(10.9MiB/10005msec) 00:26:05.421 slat (usec): min=3, max=8046, avg=40.71, stdev=460.72 00:26:05.421 clat (msec): min=20, max=130, avg=57.15, stdev=17.72 00:26:05.421 lat (msec): min=20, max=130, avg=57.19, stdev=17.73 00:26:05.421 clat percentiles (msec): 00:26:05.421 | 1.00th=[ 27], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 41], 00:26:05.421 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 56], 60.00th=[ 60], 00:26:05.421 | 70.00th=[ 66], 80.00th=[ 71], 90.00th=[ 83], 95.00th=[ 90], 00:26:05.421 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 131], 99.95th=[ 131], 00:26:05.421 | 99.99th=[ 131] 00:26:05.421 bw ( KiB/s): min= 880, max= 1424, per=4.52%, avg=1122.11, stdev=137.43, samples=19 00:26:05.421 iops : min= 220, max= 356, avg=280.53, stdev=34.36, samples=19 00:26:05.421 lat (msec) : 50=42.07%, 100=55.60%, 250=2.33% 00:26:05.421 cpu : usr=34.00%, sys=0.56%, ctx=939, majf=0, minf=9 00:26:05.421 IO depths : 1=0.9%, 2=2.1%, 4=8.5%, 8=75.3%, 16=13.2%, 32=0.0%, >=64=0.0% 00:26:05.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.421 complete : 0=0.0%, 4=89.9%, 8=6.0%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.421 issued rwts: total=2793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.421 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.421 filename2: (groupid=0, jobs=1): err= 0: pid=102740: Mon Nov 18 15:12:26 2024 00:26:05.421 read: IOPS=259, BW=1040KiB/s (1065kB/s)(10.2MiB/10014msec) 00:26:05.421 slat (usec): min=4, max=8025, avg=20.04, stdev=229.84 00:26:05.421 clat (msec): min=22, max=135, avg=61.43, stdev=19.23 00:26:05.421 lat (msec): min=22, max=135, avg=61.45, stdev=19.23 00:26:05.421 clat percentiles (msec): 00:26:05.421 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 38], 20.00th=[ 46], 00:26:05.421 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 59], 60.00th=[ 61], 00:26:05.421 | 70.00th=[ 71], 80.00th=[ 75], 90.00th=[ 85], 95.00th=[ 96], 00:26:05.421 | 99.00th=[ 117], 99.50th=[ 127], 99.90th=[ 136], 99.95th=[ 136], 00:26:05.421 | 99.99th=[ 136] 00:26:05.421 bw ( KiB/s): min= 728, max= 1344, per=4.15%, avg=1029.89, stdev=153.32, 
samples=19 00:26:05.421 iops : min= 182, max= 336, avg=257.47, stdev=38.33, samples=19 00:26:05.421 lat (msec) : 50=33.08%, 100=63.08%, 250=3.84% 00:26:05.421 cpu : usr=32.68%, sys=0.39%, ctx=890, majf=0, minf=9 00:26:05.421 IO depths : 1=0.8%, 2=2.0%, 4=8.8%, 8=75.7%, 16=12.7%, 32=0.0%, >=64=0.0% 00:26:05.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.421 complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.421 issued rwts: total=2603,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.421 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.422 filename2: (groupid=0, jobs=1): err= 0: pid=102741: Mon Nov 18 15:12:26 2024 00:26:05.422 read: IOPS=276, BW=1106KiB/s (1132kB/s)(10.8MiB/10035msec) 00:26:05.422 slat (nsec): min=4587, max=88596, avg=12133.04, stdev=7613.09 00:26:05.422 clat (msec): min=15, max=120, avg=57.72, stdev=19.16 00:26:05.422 lat (msec): min=15, max=120, avg=57.74, stdev=19.16 00:26:05.422 clat percentiles (msec): 00:26:05.422 | 1.00th=[ 26], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 40], 00:26:05.422 | 30.00th=[ 46], 40.00th=[ 51], 50.00th=[ 56], 60.00th=[ 61], 00:26:05.422 | 70.00th=[ 65], 80.00th=[ 73], 90.00th=[ 85], 95.00th=[ 92], 00:26:05.422 | 99.00th=[ 110], 99.50th=[ 116], 99.90th=[ 122], 99.95th=[ 122], 00:26:05.422 | 99.99th=[ 122] 00:26:05.422 bw ( KiB/s): min= 761, max= 1504, per=4.44%, avg=1102.30, stdev=190.83, samples=20 00:26:05.422 iops : min= 190, max= 376, avg=275.55, stdev=47.72, samples=20 00:26:05.422 lat (msec) : 20=0.58%, 50=39.15%, 100=57.10%, 250=3.17% 00:26:05.422 cpu : usr=44.43%, sys=0.60%, ctx=1248, majf=0, minf=9 00:26:05.422 IO depths : 1=0.7%, 2=1.6%, 4=7.6%, 8=76.4%, 16=13.8%, 32=0.0%, >=64=0.0% 00:26:05.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.422 complete : 0=0.0%, 4=89.8%, 8=6.5%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.422 issued rwts: total=2774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.422 filename2: (groupid=0, jobs=1): err= 0: pid=102742: Mon Nov 18 15:12:26 2024 00:26:05.422 read: IOPS=258, BW=1036KiB/s (1061kB/s)(10.1MiB/10030msec) 00:26:05.422 slat (usec): min=5, max=8046, avg=19.43, stdev=230.53 00:26:05.422 clat (msec): min=12, max=123, avg=61.58, stdev=18.65 00:26:05.422 lat (msec): min=12, max=123, avg=61.60, stdev=18.66 00:26:05.422 clat percentiles (msec): 00:26:05.422 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 48], 00:26:05.422 | 30.00th=[ 50], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 61], 00:26:05.422 | 70.00th=[ 72], 80.00th=[ 73], 90.00th=[ 85], 95.00th=[ 96], 00:26:05.422 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 124], 99.95th=[ 124], 00:26:05.422 | 99.99th=[ 124] 00:26:05.422 bw ( KiB/s): min= 816, max= 1296, per=4.17%, avg=1034.70, stdev=126.64, samples=20 00:26:05.422 iops : min= 204, max= 324, avg=258.65, stdev=31.67, samples=20 00:26:05.422 lat (msec) : 20=0.62%, 50=31.07%, 100=65.23%, 250=3.08% 00:26:05.422 cpu : usr=32.84%, sys=0.46%, ctx=866, majf=0, minf=9 00:26:05.422 IO depths : 1=0.7%, 2=2.1%, 4=9.7%, 8=74.7%, 16=12.9%, 32=0.0%, >=64=0.0% 00:26:05.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.422 complete : 0=0.0%, 4=90.2%, 8=5.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.422 issued rwts: total=2597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.422 filename2: (groupid=0, jobs=1): err= 
0: pid=102743: Mon Nov 18 15:12:26 2024 00:26:05.422 read: IOPS=247, BW=989KiB/s (1013kB/s)(9920KiB/10030msec) 00:26:05.422 slat (usec): min=3, max=8046, avg=19.48, stdev=227.80 00:26:05.422 clat (msec): min=25, max=150, avg=64.55, stdev=18.89 00:26:05.422 lat (msec): min=25, max=150, avg=64.56, stdev=18.89 00:26:05.422 clat percentiles (msec): 00:26:05.422 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 50], 00:26:05.422 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 64], 00:26:05.422 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 86], 95.00th=[ 100], 00:26:05.422 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 150], 99.95th=[ 150], 00:26:05.422 | 99.99th=[ 150] 00:26:05.422 bw ( KiB/s): min= 768, max= 1152, per=3.93%, avg=976.84, stdev=107.68, samples=19 00:26:05.422 iops : min= 192, max= 288, avg=244.16, stdev=26.90, samples=19 00:26:05.422 lat (msec) : 50=20.60%, 100=74.64%, 250=4.76% 00:26:05.422 cpu : usr=37.96%, sys=0.58%, ctx=1045, majf=0, minf=9 00:26:05.422 IO depths : 1=2.6%, 2=6.0%, 4=15.8%, 8=64.9%, 16=10.7%, 32=0.0%, >=64=0.0% 00:26:05.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.422 complete : 0=0.0%, 4=91.7%, 8=3.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.422 issued rwts: total=2480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.422 filename2: (groupid=0, jobs=1): err= 0: pid=102744: Mon Nov 18 15:12:26 2024 00:26:05.422 read: IOPS=298, BW=1193KiB/s (1221kB/s)(11.7MiB/10028msec) 00:26:05.422 slat (usec): min=3, max=8017, avg=19.02, stdev=220.51 00:26:05.422 clat (msec): min=4, max=146, avg=53.51, stdev=19.39 00:26:05.422 lat (msec): min=4, max=146, avg=53.53, stdev=19.40 00:26:05.422 clat percentiles (msec): 00:26:05.422 | 1.00th=[ 6], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 39], 00:26:05.422 | 30.00th=[ 42], 40.00th=[ 46], 50.00th=[ 52], 60.00th=[ 57], 00:26:05.422 | 70.00th=[ 61], 80.00th=[ 67], 90.00th=[ 81], 95.00th=[ 88], 00:26:05.422 | 99.00th=[ 113], 99.50th=[ 120], 99.90th=[ 146], 99.95th=[ 146], 00:26:05.422 | 99.99th=[ 146] 00:26:05.422 bw ( KiB/s): min= 870, max= 1760, per=4.80%, avg=1191.45, stdev=209.88, samples=20 00:26:05.422 iops : min= 217, max= 440, avg=297.80, stdev=52.53, samples=20 00:26:05.422 lat (msec) : 10=1.94%, 50=46.49%, 100=49.16%, 250=2.41% 00:26:05.422 cpu : usr=42.59%, sys=0.60%, ctx=1211, majf=0, minf=9 00:26:05.422 IO depths : 1=0.6%, 2=1.5%, 4=9.5%, 8=75.6%, 16=12.8%, 32=0.0%, >=64=0.0% 00:26:05.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.422 complete : 0=0.0%, 4=89.7%, 8=5.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.422 issued rwts: total=2990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.422 filename2: (groupid=0, jobs=1): err= 0: pid=102745: Mon Nov 18 15:12:26 2024 00:26:05.422 read: IOPS=250, BW=1002KiB/s (1026kB/s)(9.82MiB/10028msec) 00:26:05.422 slat (usec): min=4, max=11037, avg=25.24, stdev=321.07 00:26:05.422 clat (msec): min=28, max=155, avg=63.69, stdev=19.56 00:26:05.422 lat (msec): min=28, max=155, avg=63.71, stdev=19.57 00:26:05.422 clat percentiles (msec): 00:26:05.422 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 48], 00:26:05.422 | 30.00th=[ 53], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 67], 00:26:05.422 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 88], 95.00th=[ 99], 00:26:05.422 | 99.00th=[ 127], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:26:05.422 | 99.99th=[ 157] 
00:26:05.422 bw ( KiB/s): min= 720, max= 1248, per=4.02%, avg=998.84, stdev=143.56, samples=19 00:26:05.422 iops : min= 180, max= 312, avg=249.68, stdev=35.88, samples=19 00:26:05.422 lat (msec) : 50=27.22%, 100=67.85%, 250=4.93% 00:26:05.422 cpu : usr=34.23%, sys=0.52%, ctx=957, majf=0, minf=9 00:26:05.422 IO depths : 1=1.3%, 2=2.9%, 4=10.4%, 8=73.4%, 16=12.0%, 32=0.0%, >=64=0.0% 00:26:05.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.422 complete : 0=0.0%, 4=90.2%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.422 issued rwts: total=2513,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.422 filename2: (groupid=0, jobs=1): err= 0: pid=102746: Mon Nov 18 15:12:26 2024 00:26:05.422 read: IOPS=297, BW=1188KiB/s (1217kB/s)(11.6MiB/10026msec) 00:26:05.422 slat (usec): min=4, max=4298, avg=13.01, stdev=78.91 00:26:05.422 clat (msec): min=4, max=143, avg=53.70, stdev=19.05 00:26:05.422 lat (msec): min=4, max=143, avg=53.71, stdev=19.05 00:26:05.422 clat percentiles (msec): 00:26:05.422 | 1.00th=[ 7], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 39], 00:26:05.422 | 30.00th=[ 43], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 57], 00:26:05.422 | 70.00th=[ 61], 80.00th=[ 65], 90.00th=[ 77], 95.00th=[ 91], 00:26:05.422 | 99.00th=[ 115], 99.50th=[ 117], 99.90th=[ 131], 99.95th=[ 131], 00:26:05.422 | 99.99th=[ 144] 00:26:05.422 bw ( KiB/s): min= 814, max= 1840, per=4.79%, avg=1188.30, stdev=233.20, samples=20 00:26:05.422 iops : min= 203, max= 460, avg=297.05, stdev=58.34, samples=20 00:26:05.422 lat (msec) : 10=1.54%, 20=0.60%, 50=45.23%, 100=49.63%, 250=2.99% 00:26:05.422 cpu : usr=42.70%, sys=0.59%, ctx=1240, majf=0, minf=9 00:26:05.422 IO depths : 1=0.5%, 2=1.0%, 4=7.4%, 8=77.8%, 16=13.3%, 32=0.0%, >=64=0.0% 00:26:05.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.422 complete : 0=0.0%, 4=89.3%, 8=6.4%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.422 issued rwts: total=2978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.422 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:05.422 filename2: (groupid=0, jobs=1): err= 0: pid=102747: Mon Nov 18 15:12:26 2024 00:26:05.422 read: IOPS=283, BW=1135KiB/s (1162kB/s)(11.1MiB/10019msec) 00:26:05.422 slat (usec): min=3, max=4027, avg=17.84, stdev=149.81 00:26:05.422 clat (msec): min=22, max=117, avg=56.28, stdev=16.53 00:26:05.422 lat (msec): min=22, max=117, avg=56.29, stdev=16.53 00:26:05.422 clat percentiles (msec): 00:26:05.422 | 1.00th=[ 29], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 41], 00:26:05.422 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 56], 60.00th=[ 59], 00:26:05.422 | 70.00th=[ 63], 80.00th=[ 69], 90.00th=[ 80], 95.00th=[ 89], 00:26:05.422 | 99.00th=[ 102], 99.50th=[ 112], 99.90th=[ 118], 99.95th=[ 118], 00:26:05.422 | 99.99th=[ 118] 00:26:05.422 bw ( KiB/s): min= 896, max= 1536, per=4.55%, avg=1130.90, stdev=154.87, samples=20 00:26:05.422 iops : min= 224, max= 384, avg=282.70, stdev=38.70, samples=20 00:26:05.422 lat (msec) : 50=39.61%, 100=59.13%, 250=1.27% 00:26:05.422 cpu : usr=43.44%, sys=0.64%, ctx=1213, majf=0, minf=9 00:26:05.422 IO depths : 1=1.5%, 2=3.3%, 4=11.3%, 8=71.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:26:05.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.422 complete : 0=0.0%, 4=90.5%, 8=4.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.422 issued rwts: total=2843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.422 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:26:05.422 00:26:05.422 Run status group 0 (all jobs): 00:26:05.422 READ: bw=24.2MiB/s (25.4MB/s), 917KiB/s-1204KiB/s (939kB/s-1233kB/s), io=243MiB (255MB), run=10004-10035msec 00:26:05.422 15:12:27 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:05.422 15:12:27 -- target/dif.sh@43 -- # local sub 00:26:05.422 15:12:27 -- target/dif.sh@45 -- # for sub in "$@" 00:26:05.422 15:12:27 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:05.422 15:12:27 -- target/dif.sh@36 -- # local sub_id=0 00:26:05.422 15:12:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:05.422 15:12:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.422 15:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:05.423 15:12:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.423 15:12:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:05.423 15:12:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.423 15:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:05.423 15:12:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.423 15:12:27 -- target/dif.sh@45 -- # for sub in "$@" 00:26:05.423 15:12:27 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:05.423 15:12:27 -- target/dif.sh@36 -- # local sub_id=1 00:26:05.423 15:12:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:05.423 15:12:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.423 15:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:05.423 15:12:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.423 15:12:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:05.423 15:12:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.423 15:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:05.423 15:12:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.423 15:12:27 -- target/dif.sh@45 -- # for sub in "$@" 00:26:05.423 15:12:27 -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:05.423 15:12:27 -- target/dif.sh@36 -- # local sub_id=2 00:26:05.423 15:12:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:05.423 15:12:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.423 15:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:05.423 15:12:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.423 15:12:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:05.423 15:12:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.423 15:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:05.423 15:12:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.423 15:12:27 -- target/dif.sh@115 -- # NULL_DIF=1 00:26:05.423 15:12:27 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:05.423 15:12:27 -- target/dif.sh@115 -- # numjobs=2 00:26:05.423 15:12:27 -- target/dif.sh@115 -- # iodepth=8 00:26:05.423 15:12:27 -- target/dif.sh@115 -- # runtime=5 00:26:05.423 15:12:27 -- target/dif.sh@115 -- # files=1 00:26:05.423 15:12:27 -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:05.423 15:12:27 -- target/dif.sh@28 -- # local sub 00:26:05.423 15:12:27 -- target/dif.sh@30 -- # for sub in "$@" 00:26:05.423 15:12:27 -- target/dif.sh@31 -- # create_subsystem 0 00:26:05.423 15:12:27 -- target/dif.sh@18 -- # local sub_id=0 00:26:05.423 15:12:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 
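Editor's note: the rpc_cmd calls traced above tear down the three null-bdev subsystems (cnode0 through cnode2) before the next dif.sh case rebuilds two of them. A minimal standalone sketch of that teardown, assuming SPDK's scripts/rpc.py client and the default /var/tmp/spdk.sock RPC socket (the harness wraps the identical RPCs in its rpc_cmd helper), would be:

# Sketch only: delete each NVMe-oF subsystem, then the null bdev backing it.
for sub in 0 1 2; do
    ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${sub}"
    ./scripts/rpc.py bdev_null_delete "bdev_null${sub}"
done

The trace deletes the subsystem before its backing bdev, which avoids pulling a namespace out from under a still-exported subsystem.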
00:26:05.423 15:12:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.423 15:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:05.423 bdev_null0 00:26:05.423 15:12:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.423 15:12:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:05.423 15:12:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.423 15:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:05.423 15:12:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.423 15:12:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:05.423 15:12:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.423 15:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:05.423 15:12:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.423 15:12:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:05.423 15:12:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.423 15:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:05.423 [2024-11-18 15:12:27.203765] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.423 15:12:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.423 15:12:27 -- target/dif.sh@30 -- # for sub in "$@" 00:26:05.423 15:12:27 -- target/dif.sh@31 -- # create_subsystem 1 00:26:05.423 15:12:27 -- target/dif.sh@18 -- # local sub_id=1 00:26:05.423 15:12:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:05.423 15:12:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.423 15:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:05.423 bdev_null1 00:26:05.423 15:12:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.423 15:12:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:05.423 15:12:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.423 15:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:05.423 15:12:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.423 15:12:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:05.423 15:12:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.423 15:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:05.423 15:12:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.423 15:12:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:05.423 15:12:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.423 15:12:27 -- common/autotest_common.sh@10 -- # set +x 00:26:05.423 15:12:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.423 15:12:27 -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:05.423 15:12:27 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:05.423 15:12:27 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:05.423 15:12:27 -- nvmf/common.sh@520 -- # config=() 00:26:05.423 15:12:27 -- nvmf/common.sh@520 -- # local subsystem config 00:26:05.423 15:12:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:05.423 15:12:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:05.423 { 00:26:05.423 "params": { 00:26:05.423 "name": 
"Nvme$subsystem", 00:26:05.423 "trtype": "$TEST_TRANSPORT", 00:26:05.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:05.423 "adrfam": "ipv4", 00:26:05.423 "trsvcid": "$NVMF_PORT", 00:26:05.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:05.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:05.423 "hdgst": ${hdgst:-false}, 00:26:05.423 "ddgst": ${ddgst:-false} 00:26:05.423 }, 00:26:05.423 "method": "bdev_nvme_attach_controller" 00:26:05.423 } 00:26:05.423 EOF 00:26:05.423 )") 00:26:05.423 15:12:27 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:05.423 15:12:27 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:05.423 15:12:27 -- target/dif.sh@82 -- # gen_fio_conf 00:26:05.423 15:12:27 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:05.423 15:12:27 -- target/dif.sh@54 -- # local file 00:26:05.423 15:12:27 -- target/dif.sh@56 -- # cat 00:26:05.423 15:12:27 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:05.423 15:12:27 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:05.423 15:12:27 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:05.423 15:12:27 -- nvmf/common.sh@542 -- # cat 00:26:05.423 15:12:27 -- common/autotest_common.sh@1330 -- # shift 00:26:05.423 15:12:27 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:05.423 15:12:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:05.423 15:12:27 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:05.423 15:12:27 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:05.423 15:12:27 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:05.423 15:12:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:05.423 15:12:27 -- target/dif.sh@72 -- # (( file <= files )) 00:26:05.423 15:12:27 -- target/dif.sh@73 -- # cat 00:26:05.423 15:12:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:05.423 15:12:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:05.423 { 00:26:05.423 "params": { 00:26:05.423 "name": "Nvme$subsystem", 00:26:05.423 "trtype": "$TEST_TRANSPORT", 00:26:05.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:05.423 "adrfam": "ipv4", 00:26:05.423 "trsvcid": "$NVMF_PORT", 00:26:05.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:05.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:05.423 "hdgst": ${hdgst:-false}, 00:26:05.423 "ddgst": ${ddgst:-false} 00:26:05.423 }, 00:26:05.423 "method": "bdev_nvme_attach_controller" 00:26:05.423 } 00:26:05.423 EOF 00:26:05.423 )") 00:26:05.423 15:12:27 -- nvmf/common.sh@542 -- # cat 00:26:05.423 15:12:27 -- target/dif.sh@72 -- # (( file++ )) 00:26:05.423 15:12:27 -- target/dif.sh@72 -- # (( file <= files )) 00:26:05.423 15:12:27 -- nvmf/common.sh@544 -- # jq . 
00:26:05.423 15:12:27 -- nvmf/common.sh@545 -- # IFS=, 00:26:05.423 15:12:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:05.423 "params": { 00:26:05.423 "name": "Nvme0", 00:26:05.423 "trtype": "tcp", 00:26:05.423 "traddr": "10.0.0.2", 00:26:05.423 "adrfam": "ipv4", 00:26:05.423 "trsvcid": "4420", 00:26:05.423 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:05.423 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:05.423 "hdgst": false, 00:26:05.423 "ddgst": false 00:26:05.423 }, 00:26:05.423 "method": "bdev_nvme_attach_controller" 00:26:05.423 },{ 00:26:05.423 "params": { 00:26:05.423 "name": "Nvme1", 00:26:05.423 "trtype": "tcp", 00:26:05.423 "traddr": "10.0.0.2", 00:26:05.423 "adrfam": "ipv4", 00:26:05.423 "trsvcid": "4420", 00:26:05.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:05.423 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:05.423 "hdgst": false, 00:26:05.423 "ddgst": false 00:26:05.423 }, 00:26:05.423 "method": "bdev_nvme_attach_controller" 00:26:05.423 }' 00:26:05.423 15:12:27 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:05.423 15:12:27 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:05.423 15:12:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:05.423 15:12:27 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:05.423 15:12:27 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:05.423 15:12:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:05.423 15:12:27 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:05.423 15:12:27 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:05.424 15:12:27 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:05.424 15:12:27 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:05.424 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:05.424 ... 00:26:05.424 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:05.424 ... 00:26:05.424 fio-3.35 00:26:05.424 Starting 4 threads 00:26:05.424 [2024-11-18 15:12:27.975902] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
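Editor's note: the block above shows how the harness drives fio through SPDK's bdev fio plugin: the attach-controller JSON assembled by gen_nvmf_target_json is streamed in on /dev/fd/62 and the generated job file on /dev/fd/61, with the plugin path preloaded. A rough file-based equivalent, assuming the plugin path from the log, placeholder file names (subsystems.json, randread_dif.fio), and the usual Nvme0n1/Nvme1n1 bdev names, is sketched below; option values mirror the "bs=(R) 8192B (W) 16.0KiB (T) 128KiB, iodepth=8" banner and the numjobs=2/runtime=5 parameters set earlier in the trace.

# Sketch only: the harness streams both inputs via /dev/fd instead of files.
cat > randread_dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
# subsystems.json would hold the bdev_nvme_attach_controller config printed above.
PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
LD_PRELOAD="$PLUGIN" /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf=subsystems.json randread_dif.fio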
00:26:05.424 [2024-11-18 15:12:27.975968] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:09.614 00:26:09.614 filename0: (groupid=0, jobs=1): err= 0: pid=102873: Mon Nov 18 15:12:33 2024 00:26:09.614 read: IOPS=2176, BW=17.0MiB/s (17.8MB/s)(85.0MiB/5001msec) 00:26:09.614 slat (usec): min=6, max=100, avg=16.13, stdev= 8.64 00:26:09.614 clat (usec): min=890, max=10307, avg=3596.89, stdev=425.19 00:26:09.614 lat (usec): min=896, max=10328, avg=3613.02, stdev=425.49 00:26:09.614 clat percentiles (usec): 00:26:09.614 | 1.00th=[ 3163], 5.00th=[ 3294], 10.00th=[ 3359], 20.00th=[ 3425], 00:26:09.614 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3556], 00:26:09.614 | 70.00th=[ 3589], 80.00th=[ 3654], 90.00th=[ 3785], 95.00th=[ 4080], 00:26:09.614 | 99.00th=[ 5211], 99.50th=[ 5342], 99.90th=[ 6783], 99.95th=[10159], 00:26:09.614 | 99.99th=[10290] 00:26:09.614 bw ( KiB/s): min=14848, max=18048, per=24.97%, avg=17402.67, stdev=982.70, samples=9 00:26:09.614 iops : min= 1856, max= 2256, avg=2175.33, stdev=122.84, samples=9 00:26:09.614 lat (usec) : 1000=0.05% 00:26:09.614 lat (msec) : 2=0.13%, 4=94.07%, 10=5.69%, 20=0.06% 00:26:09.614 cpu : usr=95.94%, sys=2.94%, ctx=6, majf=0, minf=0 00:26:09.614 IO depths : 1=8.3%, 2=24.0%, 4=50.9%, 8=16.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:09.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:09.614 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:09.614 issued rwts: total=10886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:09.614 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:09.614 filename0: (groupid=0, jobs=1): err= 0: pid=102874: Mon Nov 18 15:12:33 2024 00:26:09.614 read: IOPS=2183, BW=17.1MiB/s (17.9MB/s)(85.3MiB/5002msec) 00:26:09.614 slat (nsec): min=5755, max=76248, avg=8714.11, stdev=5706.41 00:26:09.614 clat (usec): min=1041, max=8551, avg=3622.22, stdev=423.42 00:26:09.614 lat (usec): min=1048, max=8571, avg=3630.93, stdev=423.83 00:26:09.614 clat percentiles (usec): 00:26:09.614 | 1.00th=[ 2737], 5.00th=[ 3359], 10.00th=[ 3392], 20.00th=[ 3458], 00:26:09.614 | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3589], 00:26:09.614 | 70.00th=[ 3621], 80.00th=[ 3687], 90.00th=[ 3851], 95.00th=[ 4228], 00:26:09.614 | 99.00th=[ 5276], 99.50th=[ 5342], 99.90th=[ 7570], 99.95th=[ 8455], 00:26:09.614 | 99.99th=[ 8586] 00:26:09.614 bw ( KiB/s): min=15024, max=18048, per=25.04%, avg=17456.00, stdev=930.93, samples=9 00:26:09.614 iops : min= 1878, max= 2256, avg=2182.00, stdev=116.37, samples=9 00:26:09.614 lat (msec) : 2=0.28%, 4=93.66%, 10=6.05% 00:26:09.614 cpu : usr=95.62%, sys=3.18%, ctx=7, majf=0, minf=9 00:26:09.614 IO depths : 1=7.2%, 2=22.1%, 4=52.8%, 8=17.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:09.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:09.614 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:09.614 issued rwts: total=10923,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:09.614 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:09.614 filename1: (groupid=0, jobs=1): err= 0: pid=102875: Mon Nov 18 15:12:33 2024 00:26:09.614 read: IOPS=2177, BW=17.0MiB/s (17.8MB/s)(85.1MiB/5001msec) 00:26:09.614 slat (usec): min=6, max=100, avg=15.99, stdev= 8.75 00:26:09.614 clat (usec): min=671, max=11540, avg=3596.60, stdev=469.24 00:26:09.614 lat (usec): min=682, max=11568, avg=3612.59, stdev=469.41 00:26:09.614 clat percentiles (usec): 
00:26:09.614 | 1.00th=[ 2802], 5.00th=[ 3294], 10.00th=[ 3359], 20.00th=[ 3425], 00:26:09.614 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3556], 00:26:09.614 | 70.00th=[ 3589], 80.00th=[ 3654], 90.00th=[ 3785], 95.00th=[ 4080], 00:26:09.614 | 99.00th=[ 5276], 99.50th=[ 5407], 99.90th=[ 8586], 99.95th=[10290], 00:26:09.614 | 99.99th=[10290] 00:26:09.614 bw ( KiB/s): min=14848, max=18048, per=24.96%, avg=17397.67, stdev=982.58, samples=9 00:26:09.614 iops : min= 1856, max= 2256, avg=2174.67, stdev=122.81, samples=9 00:26:09.614 lat (usec) : 750=0.01% 00:26:09.614 lat (msec) : 2=0.15%, 4=94.16%, 10=5.61%, 20=0.07% 00:26:09.614 cpu : usr=95.86%, sys=3.02%, ctx=20, majf=0, minf=0 00:26:09.614 IO depths : 1=7.6%, 2=25.0%, 4=50.0%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:09.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:09.614 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:09.614 issued rwts: total=10888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:09.614 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:09.614 filename1: (groupid=0, jobs=1): err= 0: pid=102876: Mon Nov 18 15:12:33 2024 00:26:09.614 read: IOPS=2176, BW=17.0MiB/s (17.8MB/s)(85.1MiB/5003msec) 00:26:09.614 slat (nsec): min=5772, max=96092, avg=11230.29, stdev=7989.62 00:26:09.614 clat (usec): min=1176, max=10263, avg=3629.46, stdev=452.46 00:26:09.614 lat (usec): min=1183, max=10275, avg=3640.69, stdev=452.08 00:26:09.614 clat percentiles (usec): 00:26:09.614 | 1.00th=[ 2900], 5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3458], 00:26:09.614 | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3589], 00:26:09.614 | 70.00th=[ 3621], 80.00th=[ 3687], 90.00th=[ 3851], 95.00th=[ 4228], 00:26:09.614 | 99.00th=[ 5276], 99.50th=[ 5407], 99.90th=[ 6849], 99.95th=[10290], 00:26:09.614 | 99.99th=[10290] 00:26:09.614 bw ( KiB/s): min=14848, max=17968, per=24.96%, avg=17397.33, stdev=975.48, samples=9 00:26:09.614 iops : min= 1856, max= 2246, avg=2174.67, stdev=121.93, samples=9 00:26:09.614 lat (msec) : 2=0.18%, 4=93.19%, 10=6.57%, 20=0.06% 00:26:09.614 cpu : usr=95.24%, sys=3.52%, ctx=8, majf=0, minf=0 00:26:09.614 IO depths : 1=4.5%, 2=16.0%, 4=59.0%, 8=20.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:09.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:09.614 complete : 0=0.0%, 4=89.7%, 8=10.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:09.614 issued rwts: total=10891,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:09.614 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:09.614 00:26:09.614 Run status group 0 (all jobs): 00:26:09.614 READ: bw=68.1MiB/s (71.4MB/s), 17.0MiB/s-17.1MiB/s (17.8MB/s-17.9MB/s), io=341MiB (357MB), run=5001-5003msec 00:26:09.873 15:12:33 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:09.873 15:12:33 -- target/dif.sh@43 -- # local sub 00:26:09.873 15:12:33 -- target/dif.sh@45 -- # for sub in "$@" 00:26:09.873 15:12:33 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:09.873 15:12:33 -- target/dif.sh@36 -- # local sub_id=0 00:26:09.873 15:12:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:09.873 15:12:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.873 15:12:33 -- common/autotest_common.sh@10 -- # set +x 00:26:09.873 15:12:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.873 15:12:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:09.873 15:12:33 -- common/autotest_common.sh@561 
-- # xtrace_disable 00:26:09.873 15:12:33 -- common/autotest_common.sh@10 -- # set +x 00:26:09.873 15:12:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.873 15:12:33 -- target/dif.sh@45 -- # for sub in "$@" 00:26:09.873 15:12:33 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:09.873 15:12:33 -- target/dif.sh@36 -- # local sub_id=1 00:26:09.873 15:12:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:09.873 15:12:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.873 15:12:33 -- common/autotest_common.sh@10 -- # set +x 00:26:09.873 15:12:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.873 15:12:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:09.873 15:12:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.873 15:12:33 -- common/autotest_common.sh@10 -- # set +x 00:26:09.873 15:12:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.873 00:26:09.873 real 0m23.707s 00:26:09.873 user 2m8.084s 00:26:09.873 sys 0m3.558s 00:26:09.873 15:12:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:09.873 15:12:33 -- common/autotest_common.sh@10 -- # set +x 00:26:09.873 ************************************ 00:26:09.873 END TEST fio_dif_rand_params 00:26:09.873 ************************************ 00:26:09.873 15:12:33 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:09.873 15:12:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:09.873 15:12:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:09.873 15:12:33 -- common/autotest_common.sh@10 -- # set +x 00:26:09.873 ************************************ 00:26:09.873 START TEST fio_dif_digest 00:26:09.874 ************************************ 00:26:09.874 15:12:33 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:26:09.874 15:12:33 -- target/dif.sh@123 -- # local NULL_DIF 00:26:09.874 15:12:33 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:09.874 15:12:33 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:09.874 15:12:33 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:09.874 15:12:33 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:09.874 15:12:33 -- target/dif.sh@127 -- # numjobs=3 00:26:09.874 15:12:33 -- target/dif.sh@127 -- # iodepth=3 00:26:09.874 15:12:33 -- target/dif.sh@127 -- # runtime=10 00:26:09.874 15:12:33 -- target/dif.sh@128 -- # hdgst=true 00:26:09.874 15:12:33 -- target/dif.sh@128 -- # ddgst=true 00:26:09.874 15:12:33 -- target/dif.sh@130 -- # create_subsystems 0 00:26:09.874 15:12:33 -- target/dif.sh@28 -- # local sub 00:26:09.874 15:12:33 -- target/dif.sh@30 -- # for sub in "$@" 00:26:09.874 15:12:33 -- target/dif.sh@31 -- # create_subsystem 0 00:26:09.874 15:12:33 -- target/dif.sh@18 -- # local sub_id=0 00:26:09.874 15:12:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:09.874 15:12:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.874 15:12:33 -- common/autotest_common.sh@10 -- # set +x 00:26:09.874 bdev_null0 00:26:09.874 15:12:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.874 15:12:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:09.874 15:12:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.874 15:12:33 -- common/autotest_common.sh@10 -- # set +x 00:26:09.874 15:12:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.874 15:12:33 
-- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:09.874 15:12:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.874 15:12:33 -- common/autotest_common.sh@10 -- # set +x 00:26:09.874 15:12:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.874 15:12:33 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:09.874 15:12:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.874 15:12:33 -- common/autotest_common.sh@10 -- # set +x 00:26:09.874 [2024-11-18 15:12:33.438291] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:09.874 15:12:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.874 15:12:33 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:09.874 15:12:33 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:09.874 15:12:33 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:09.874 15:12:33 -- nvmf/common.sh@520 -- # config=() 00:26:09.874 15:12:33 -- nvmf/common.sh@520 -- # local subsystem config 00:26:09.874 15:12:33 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:09.874 15:12:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:09.874 15:12:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:09.874 { 00:26:09.874 "params": { 00:26:09.874 "name": "Nvme$subsystem", 00:26:09.874 "trtype": "$TEST_TRANSPORT", 00:26:09.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:09.874 "adrfam": "ipv4", 00:26:09.874 "trsvcid": "$NVMF_PORT", 00:26:09.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:09.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:09.874 "hdgst": ${hdgst:-false}, 00:26:09.874 "ddgst": ${ddgst:-false} 00:26:09.874 }, 00:26:09.874 "method": "bdev_nvme_attach_controller" 00:26:09.874 } 00:26:09.874 EOF 00:26:09.874 )") 00:26:09.874 15:12:33 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:09.874 15:12:33 -- target/dif.sh@82 -- # gen_fio_conf 00:26:09.874 15:12:33 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:09.874 15:12:33 -- target/dif.sh@54 -- # local file 00:26:09.874 15:12:33 -- target/dif.sh@56 -- # cat 00:26:09.874 15:12:33 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:09.874 15:12:33 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:09.874 15:12:33 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:09.874 15:12:33 -- common/autotest_common.sh@1330 -- # shift 00:26:09.874 15:12:33 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:09.874 15:12:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:09.874 15:12:33 -- nvmf/common.sh@542 -- # cat 00:26:09.874 15:12:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:09.874 15:12:33 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:09.874 15:12:33 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:09.874 15:12:33 -- target/dif.sh@72 -- # (( file <= files )) 00:26:09.874 15:12:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:09.874 15:12:33 -- nvmf/common.sh@544 -- # jq . 
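Editor's note: for the fio_dif_digest case the target side is a single null bdev with 16-byte metadata and DIF type 3, exported over TCP. The same construction with scripts/rpc.py, default RPC socket assumed and all values copied from the trace above, looks like:

# Sketch only: build the fio_dif_digest target (values mirror the trace).
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420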
00:26:09.874 15:12:33 -- nvmf/common.sh@545 -- # IFS=, 00:26:09.874 15:12:33 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:09.874 "params": { 00:26:09.874 "name": "Nvme0", 00:26:09.874 "trtype": "tcp", 00:26:09.874 "traddr": "10.0.0.2", 00:26:09.874 "adrfam": "ipv4", 00:26:09.874 "trsvcid": "4420", 00:26:09.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:09.874 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:09.874 "hdgst": true, 00:26:09.874 "ddgst": true 00:26:09.874 }, 00:26:09.874 "method": "bdev_nvme_attach_controller" 00:26:09.874 }' 00:26:10.133 15:12:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:10.133 15:12:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:10.133 15:12:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:10.133 15:12:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:10.133 15:12:33 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:10.133 15:12:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:10.133 15:12:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:10.133 15:12:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:10.133 15:12:33 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:10.133 15:12:33 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:10.133 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:10.133 ... 00:26:10.133 fio-3.35 00:26:10.133 Starting 3 threads 00:26:10.700 [2024-11-18 15:12:34.067986] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
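Editor's note: the resolved attach-controller parameters above differ from the earlier runs only in "hdgst": true and "ddgst": true, which enable the NVMe/TCP header and data digests (CRC32C) on the connection the fio plugin opens. Written to a file instead of streamed over /dev/fd/62, one such config entry would look like the sketch below; the file name is a placeholder, and the harness nests the entry inside a full SPDK JSON config under the bdev subsystem's "config" array.

# Sketch only: one bdev_nvme_attach_controller entry with both digests enabled.
cat > digest_target.json.fragment <<'EOF'
{
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": true,
    "ddgst": true
  }
}
EOF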
00:26:10.700 [2024-11-18 15:12:34.068067] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:20.673 00:26:20.673 filename0: (groupid=0, jobs=1): err= 0: pid=102982: Mon Nov 18 15:12:44 2024 00:26:20.673 read: IOPS=223, BW=28.0MiB/s (29.4MB/s)(280MiB/10002msec) 00:26:20.673 slat (nsec): min=5206, max=65544, avg=15612.28, stdev=6031.84 00:26:20.673 clat (usec): min=4052, max=25756, avg=13377.35, stdev=2745.59 00:26:20.673 lat (usec): min=4072, max=25778, avg=13392.96, stdev=2745.18 00:26:20.673 clat percentiles (usec): 00:26:20.673 | 1.00th=[ 7701], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[10421], 00:26:20.673 | 30.00th=[13173], 40.00th=[13698], 50.00th=[13960], 60.00th=[14353], 00:26:20.673 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15664], 95.00th=[16319], 00:26:20.673 | 99.00th=[21627], 99.50th=[22152], 99.90th=[24511], 99.95th=[24773], 00:26:20.673 | 99.99th=[25822] 00:26:20.673 bw ( KiB/s): min=23249, max=33536, per=30.12%, avg=28555.95, stdev=2720.42, samples=19 00:26:20.673 iops : min= 181, max= 262, avg=223.00, stdev=21.35, samples=19 00:26:20.673 lat (msec) : 10=18.26%, 20=80.04%, 50=1.70% 00:26:20.673 cpu : usr=94.60%, sys=4.05%, ctx=735, majf=0, minf=9 00:26:20.673 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:20.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.673 issued rwts: total=2240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.673 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:20.673 filename0: (groupid=0, jobs=1): err= 0: pid=102983: Mon Nov 18 15:12:44 2024 00:26:20.673 read: IOPS=270, BW=33.8MiB/s (35.5MB/s)(339MiB/10007msec) 00:26:20.673 slat (nsec): min=5554, max=69212, avg=14699.64, stdev=6216.95 00:26:20.673 clat (usec): min=5926, max=52090, avg=11066.30, stdev=2524.58 00:26:20.673 lat (usec): min=5936, max=52100, avg=11081.00, stdev=2524.33 00:26:20.673 clat percentiles (usec): 00:26:20.673 | 1.00th=[ 6652], 5.00th=[ 7177], 10.00th=[ 7570], 20.00th=[ 9110], 00:26:20.673 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11469], 60.00th=[11731], 00:26:20.673 | 70.00th=[11994], 80.00th=[12518], 90.00th=[13042], 95.00th=[13566], 00:26:20.673 | 99.00th=[17695], 99.50th=[18482], 99.90th=[50070], 99.95th=[52167], 00:26:20.673 | 99.99th=[52167] 00:26:20.673 bw ( KiB/s): min=27648, max=40960, per=36.38%, avg=34489.26, stdev=3080.65, samples=19 00:26:20.673 iops : min= 216, max= 320, avg=269.42, stdev=24.09, samples=19 00:26:20.673 lat (msec) : 10=23.41%, 20=76.40%, 50=0.07%, 100=0.11% 00:26:20.673 cpu : usr=93.88%, sys=4.53%, ctx=60, majf=0, minf=9 00:26:20.673 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:20.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.673 issued rwts: total=2708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.673 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:20.673 filename0: (groupid=0, jobs=1): err= 0: pid=102984: Mon Nov 18 15:12:44 2024 00:26:20.673 read: IOPS=246, BW=30.8MiB/s (32.3MB/s)(308MiB/10006msec) 00:26:20.673 slat (nsec): min=6149, max=85879, avg=13065.52, stdev=5647.69 00:26:20.673 clat (usec): min=7021, max=98846, avg=12169.61, stdev=9031.71 00:26:20.673 lat (usec): min=7030, max=98857, avg=12182.67, stdev=9031.78 00:26:20.673 clat percentiles (usec): 
00:26:20.673 | 1.00th=[ 8225], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503], 00:26:20.673 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:26:20.673 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11600], 95.00th=[16712], 00:26:20.673 | 99.00th=[52167], 99.50th=[52691], 99.90th=[93848], 99.95th=[94897], 00:26:20.673 | 99.99th=[99091] 00:26:20.673 bw ( KiB/s): min=15872, max=39424, per=33.68%, avg=31931.47, stdev=6492.15, samples=19 00:26:20.673 iops : min= 124, max= 308, avg=249.42, stdev=50.69, samples=19 00:26:20.673 lat (msec) : 10=40.24%, 20=55.18%, 50=0.69%, 100=3.90% 00:26:20.673 cpu : usr=94.00%, sys=4.61%, ctx=16, majf=0, minf=9 00:26:20.673 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:20.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.673 issued rwts: total=2463,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.673 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:20.673 00:26:20.673 Run status group 0 (all jobs): 00:26:20.673 READ: bw=92.6MiB/s (97.1MB/s), 28.0MiB/s-33.8MiB/s (29.4MB/s-35.5MB/s), io=926MiB (971MB), run=10002-10007msec 00:26:20.932 15:12:44 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:20.932 15:12:44 -- target/dif.sh@43 -- # local sub 00:26:20.932 15:12:44 -- target/dif.sh@45 -- # for sub in "$@" 00:26:20.932 15:12:44 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:20.932 15:12:44 -- target/dif.sh@36 -- # local sub_id=0 00:26:20.932 15:12:44 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:20.932 15:12:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.932 15:12:44 -- common/autotest_common.sh@10 -- # set +x 00:26:20.932 15:12:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.932 15:12:44 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:20.932 15:12:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.932 15:12:44 -- common/autotest_common.sh@10 -- # set +x 00:26:20.932 15:12:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.932 00:26:20.932 real 0m11.001s 00:26:20.932 user 0m28.856s 00:26:20.932 sys 0m1.627s 00:26:20.932 15:12:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:20.932 15:12:44 -- common/autotest_common.sh@10 -- # set +x 00:26:20.932 ************************************ 00:26:20.932 END TEST fio_dif_digest 00:26:20.932 ************************************ 00:26:20.932 15:12:44 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:20.932 15:12:44 -- target/dif.sh@147 -- # nvmftestfini 00:26:20.932 15:12:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:20.932 15:12:44 -- nvmf/common.sh@116 -- # sync 00:26:20.932 15:12:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:20.932 15:12:44 -- nvmf/common.sh@119 -- # set +e 00:26:20.932 15:12:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:20.932 15:12:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:20.932 rmmod nvme_tcp 00:26:20.932 rmmod nvme_fabrics 00:26:21.190 rmmod nvme_keyring 00:26:21.190 15:12:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:21.190 15:12:44 -- nvmf/common.sh@123 -- # set -e 00:26:21.190 15:12:44 -- nvmf/common.sh@124 -- # return 0 00:26:21.190 15:12:44 -- nvmf/common.sh@477 -- # '[' -n 102219 ']' 00:26:21.190 15:12:44 -- nvmf/common.sh@478 -- # killprocess 102219 00:26:21.190 15:12:44 -- common/autotest_common.sh@936 -- # 
'[' -z 102219 ']' 00:26:21.190 15:12:44 -- common/autotest_common.sh@940 -- # kill -0 102219 00:26:21.190 15:12:44 -- common/autotest_common.sh@941 -- # uname 00:26:21.190 15:12:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:21.190 15:12:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 102219 00:26:21.190 15:12:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:21.190 15:12:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:21.190 killing process with pid 102219 00:26:21.190 15:12:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 102219' 00:26:21.190 15:12:44 -- common/autotest_common.sh@955 -- # kill 102219 00:26:21.190 15:12:44 -- common/autotest_common.sh@960 -- # wait 102219 00:26:21.190 15:12:44 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:21.190 15:12:44 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:21.758 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:21.758 Waiting for block devices as requested 00:26:21.758 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:21.758 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:22.016 15:12:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:22.016 15:12:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:22.016 15:12:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:22.016 15:12:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:22.016 15:12:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.016 15:12:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:22.016 15:12:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.016 15:12:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:22.016 00:26:22.016 real 1m0.228s 00:26:22.016 user 3m51.994s 00:26:22.016 sys 0m14.120s 00:26:22.016 15:12:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:22.016 15:12:45 -- common/autotest_common.sh@10 -- # set +x 00:26:22.016 ************************************ 00:26:22.016 END TEST nvmf_dif 00:26:22.016 ************************************ 00:26:22.016 15:12:45 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:22.016 15:12:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:22.016 15:12:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:22.016 15:12:45 -- common/autotest_common.sh@10 -- # set +x 00:26:22.016 ************************************ 00:26:22.016 START TEST nvmf_abort_qd_sizes 00:26:22.016 ************************************ 00:26:22.016 15:12:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:22.016 * Looking for test storage... 
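Editor's note: the nvmftestfini sequence recorded above unloads the kernel NVMe/TCP modules, stops the nvmf_tgt reactor process, and hands the NVMe devices back to kernel drivers before the next suite starts. Condensed into plain commands, with the pid and repo path taken from the log (this mirrors the helpers rather than replacing them):

# Sketch only: condensed from the nvmftestfini trace above.
sync
modprobe -v -r nvme-tcp       # the trace shows nvme_fabrics/nvme_keyring leaving with it
modprobe -v -r nvme-fabrics
kill 102219 && wait 102219    # the nvmf_tgt instance that served the dif tests
/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset   # rebind devices to kernel drivers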
00:26:22.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:22.016 15:12:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:22.016 15:12:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:22.016 15:12:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:22.276 15:12:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:22.276 15:12:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:22.276 15:12:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:22.276 15:12:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:22.276 15:12:45 -- scripts/common.sh@335 -- # IFS=.-: 00:26:22.276 15:12:45 -- scripts/common.sh@335 -- # read -ra ver1 00:26:22.276 15:12:45 -- scripts/common.sh@336 -- # IFS=.-: 00:26:22.276 15:12:45 -- scripts/common.sh@336 -- # read -ra ver2 00:26:22.276 15:12:45 -- scripts/common.sh@337 -- # local 'op=<' 00:26:22.276 15:12:45 -- scripts/common.sh@339 -- # ver1_l=2 00:26:22.276 15:12:45 -- scripts/common.sh@340 -- # ver2_l=1 00:26:22.276 15:12:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:22.276 15:12:45 -- scripts/common.sh@343 -- # case "$op" in 00:26:22.276 15:12:45 -- scripts/common.sh@344 -- # : 1 00:26:22.276 15:12:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:22.276 15:12:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:22.276 15:12:45 -- scripts/common.sh@364 -- # decimal 1 00:26:22.276 15:12:45 -- scripts/common.sh@352 -- # local d=1 00:26:22.276 15:12:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:22.276 15:12:45 -- scripts/common.sh@354 -- # echo 1 00:26:22.276 15:12:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:22.276 15:12:45 -- scripts/common.sh@365 -- # decimal 2 00:26:22.276 15:12:45 -- scripts/common.sh@352 -- # local d=2 00:26:22.276 15:12:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:22.276 15:12:45 -- scripts/common.sh@354 -- # echo 2 00:26:22.276 15:12:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:22.276 15:12:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:22.276 15:12:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:22.276 15:12:45 -- scripts/common.sh@367 -- # return 0 00:26:22.276 15:12:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:22.276 15:12:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:22.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.276 --rc genhtml_branch_coverage=1 00:26:22.276 --rc genhtml_function_coverage=1 00:26:22.276 --rc genhtml_legend=1 00:26:22.276 --rc geninfo_all_blocks=1 00:26:22.276 --rc geninfo_unexecuted_blocks=1 00:26:22.276 00:26:22.276 ' 00:26:22.276 15:12:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:22.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.276 --rc genhtml_branch_coverage=1 00:26:22.276 --rc genhtml_function_coverage=1 00:26:22.276 --rc genhtml_legend=1 00:26:22.276 --rc geninfo_all_blocks=1 00:26:22.276 --rc geninfo_unexecuted_blocks=1 00:26:22.276 00:26:22.276 ' 00:26:22.276 15:12:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:22.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.276 --rc genhtml_branch_coverage=1 00:26:22.276 --rc genhtml_function_coverage=1 00:26:22.276 --rc genhtml_legend=1 00:26:22.276 --rc geninfo_all_blocks=1 00:26:22.276 --rc geninfo_unexecuted_blocks=1 00:26:22.276 00:26:22.276 ' 00:26:22.276 
15:12:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:22.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.276 --rc genhtml_branch_coverage=1 00:26:22.276 --rc genhtml_function_coverage=1 00:26:22.276 --rc genhtml_legend=1 00:26:22.276 --rc geninfo_all_blocks=1 00:26:22.276 --rc geninfo_unexecuted_blocks=1 00:26:22.276 00:26:22.276 ' 00:26:22.276 15:12:45 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:22.276 15:12:45 -- nvmf/common.sh@7 -- # uname -s 00:26:22.276 15:12:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:22.276 15:12:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:22.276 15:12:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:22.276 15:12:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:22.276 15:12:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:22.276 15:12:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:22.276 15:12:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:22.276 15:12:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:22.276 15:12:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:22.276 15:12:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:22.276 15:12:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd 00:26:22.276 15:12:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=1906e403-ffd2-4cdd-80f4-a467486e83fd 00:26:22.276 15:12:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:22.276 15:12:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:22.276 15:12:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:22.276 15:12:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:22.276 15:12:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:22.276 15:12:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:22.276 15:12:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:22.276 15:12:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.276 15:12:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.276 15:12:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.276 15:12:45 -- paths/export.sh@5 -- # export PATH 00:26:22.276 15:12:45 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.276 15:12:45 -- nvmf/common.sh@46 -- # : 0 00:26:22.276 15:12:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:22.276 15:12:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:22.276 15:12:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:22.276 15:12:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:22.276 15:12:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:22.276 15:12:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:22.276 15:12:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:22.276 15:12:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:22.276 15:12:45 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:26:22.276 15:12:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:22.276 15:12:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:22.276 15:12:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:22.276 15:12:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:22.276 15:12:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:22.276 15:12:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.276 15:12:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:22.276 15:12:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.276 15:12:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:22.276 15:12:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:22.276 15:12:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:22.276 15:12:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:22.276 15:12:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:22.276 15:12:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:22.276 15:12:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:22.276 15:12:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:22.276 15:12:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:22.276 15:12:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:22.276 15:12:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:22.276 15:12:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:22.277 15:12:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:22.277 15:12:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:22.277 15:12:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:22.277 15:12:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:22.277 15:12:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:22.277 15:12:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:22.277 15:12:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:22.277 15:12:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:22.277 Cannot find device "nvmf_tgt_br" 00:26:22.277 15:12:45 -- nvmf/common.sh@154 -- # true 00:26:22.277 15:12:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:22.277 Cannot find device "nvmf_tgt_br2" 00:26:22.277 15:12:45 -- nvmf/common.sh@155 -- # true 
00:26:22.277 15:12:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:22.277 15:12:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:22.277 Cannot find device "nvmf_tgt_br" 00:26:22.277 15:12:45 -- nvmf/common.sh@157 -- # true 00:26:22.277 15:12:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:22.277 Cannot find device "nvmf_tgt_br2" 00:26:22.277 15:12:45 -- nvmf/common.sh@158 -- # true 00:26:22.277 15:12:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:22.277 15:12:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:22.277 15:12:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:22.277 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:22.277 15:12:45 -- nvmf/common.sh@161 -- # true 00:26:22.277 15:12:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:22.277 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:22.277 15:12:45 -- nvmf/common.sh@162 -- # true 00:26:22.277 15:12:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:22.277 15:12:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:22.277 15:12:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:22.277 15:12:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:22.536 15:12:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:22.536 15:12:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:22.536 15:12:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:22.536 15:12:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:22.536 15:12:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:22.536 15:12:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:22.536 15:12:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:22.536 15:12:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:22.536 15:12:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:22.536 15:12:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:22.536 15:12:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:22.536 15:12:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:22.536 15:12:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:22.536 15:12:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:22.536 15:12:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:22.536 15:12:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:22.536 15:12:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:22.536 15:12:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:22.536 15:12:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:22.536 15:12:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:22.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:22.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:26:22.536 00:26:22.536 --- 10.0.0.2 ping statistics --- 00:26:22.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.536 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:26:22.536 15:12:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:22.536 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:22.536 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:26:22.536 00:26:22.536 --- 10.0.0.3 ping statistics --- 00:26:22.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.536 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:26:22.536 15:12:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:22.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:22.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:26:22.536 00:26:22.536 --- 10.0.0.1 ping statistics --- 00:26:22.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.536 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:26:22.536 15:12:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:22.536 15:12:46 -- nvmf/common.sh@421 -- # return 0 00:26:22.536 15:12:46 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:22.536 15:12:46 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:23.472 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:23.472 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:23.472 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:26:23.472 15:12:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:23.472 15:12:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:23.472 15:12:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:23.472 15:12:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:23.472 15:12:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:23.472 15:12:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:23.472 15:12:46 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:26:23.472 15:12:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:23.472 15:12:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:23.472 15:12:46 -- common/autotest_common.sh@10 -- # set +x 00:26:23.472 15:12:46 -- nvmf/common.sh@469 -- # nvmfpid=103588 00:26:23.472 15:12:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:23.472 15:12:46 -- nvmf/common.sh@470 -- # waitforlisten 103588 00:26:23.472 15:12:46 -- common/autotest_common.sh@829 -- # '[' -z 103588 ']' 00:26:23.472 15:12:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.472 15:12:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:23.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.472 15:12:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.472 15:12:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:23.472 15:12:46 -- common/autotest_common.sh@10 -- # set +x 00:26:23.472 [2024-11-18 15:12:47.041421] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
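Editor's note: before this nvmf_tgt instance comes up, nvmf_veth_init builds the virtual topology the abort_qd_sizes test talks over: a network namespace for the target, three veth pairs joined by a bridge, 10.0.0.1/24 on the initiator side, 10.0.0.2 and 10.0.0.3 inside the namespace, and an iptables rule admitting TCP port 4420; the pings above confirm the wiring. The same setup as a standalone sketch, with names and addresses copied from the trace:

# Sketch only: the veth/namespace plumbing that nvmf_veth_init set up above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The target is then started inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt ...), which is why the listener address 10.0.0.2 is reachable from the initiator only through this bridge.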
00:26:23.472 [2024-11-18 15:12:47.041513] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:23.733 [2024-11-18 15:12:47.190988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:23.733 [2024-11-18 15:12:47.264174] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:23.733 [2024-11-18 15:12:47.264385] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:23.733 [2024-11-18 15:12:47.264404] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:23.733 [2024-11-18 15:12:47.264417] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:23.733 [2024-11-18 15:12:47.264531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.733 [2024-11-18 15:12:47.265021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:23.733 [2024-11-18 15:12:47.265736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:23.733 [2024-11-18 15:12:47.265789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.725 15:12:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:24.725 15:12:48 -- common/autotest_common.sh@862 -- # return 0 00:26:24.725 15:12:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:24.725 15:12:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:24.725 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:26:24.725 15:12:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:26:24.725 15:12:48 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:24.725 15:12:48 -- scripts/common.sh@312 -- # local nvmes 00:26:24.725 15:12:48 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:24.725 15:12:48 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:24.725 15:12:48 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:24.725 15:12:48 -- scripts/common.sh@297 -- # local bdf= 00:26:24.725 15:12:48 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:24.725 15:12:48 -- scripts/common.sh@232 -- # local class 00:26:24.725 15:12:48 -- scripts/common.sh@233 -- # local subclass 00:26:24.725 15:12:48 -- scripts/common.sh@234 -- # local progif 00:26:24.725 15:12:48 -- scripts/common.sh@235 -- # printf %02x 1 00:26:24.725 15:12:48 -- scripts/common.sh@235 -- # class=01 00:26:24.725 15:12:48 -- scripts/common.sh@236 -- # printf %02x 8 00:26:24.725 15:12:48 -- scripts/common.sh@236 -- # subclass=08 00:26:24.725 15:12:48 -- scripts/common.sh@237 -- # printf %02x 2 00:26:24.725 15:12:48 -- scripts/common.sh@237 -- # progif=02 00:26:24.725 15:12:48 -- scripts/common.sh@239 -- # hash lspci 00:26:24.725 15:12:48 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:24.725 15:12:48 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:24.725 15:12:48 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:24.725 15:12:48 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:24.725 15:12:48 -- scripts/common.sh@244 -- # tr -d '"' 00:26:24.725 15:12:48 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:24.725 15:12:48 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:24.725 15:12:48 -- scripts/common.sh@15 -- # local i 00:26:24.725 15:12:48 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:24.725 15:12:48 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:24.725 15:12:48 -- scripts/common.sh@24 -- # return 0 00:26:24.725 15:12:48 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:24.725 15:12:48 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:24.725 15:12:48 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:26:24.725 15:12:48 -- scripts/common.sh@15 -- # local i 00:26:24.725 15:12:48 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:26:24.725 15:12:48 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:24.725 15:12:48 -- scripts/common.sh@24 -- # return 0 00:26:24.725 15:12:48 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:26:24.725 15:12:48 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:24.725 15:12:48 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:24.725 15:12:48 -- scripts/common.sh@322 -- # uname -s 00:26:24.725 15:12:48 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:24.725 15:12:48 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:24.725 15:12:48 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:24.725 15:12:48 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:26:24.725 15:12:48 -- scripts/common.sh@322 -- # uname -s 00:26:24.725 15:12:48 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:24.725 15:12:48 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:24.725 15:12:48 -- scripts/common.sh@327 -- # (( 2 )) 00:26:24.725 15:12:48 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:26:24.725 15:12:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:24.725 15:12:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:24.725 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:26:24.725 ************************************ 00:26:24.725 START TEST spdk_target_abort 00:26:24.725 ************************************ 00:26:24.725 15:12:48 -- common/autotest_common.sh@1114 -- # spdk_target 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:26:24.725 15:12:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.725 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:26:24.725 spdk_targetn1 00:26:24.725 15:12:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:24.725 15:12:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.725 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:26:24.725 [2024-11-18 
15:12:48.257561] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.725 15:12:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:26:24.725 15:12:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.725 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:26:24.725 15:12:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:26:24.725 15:12:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.725 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:26:24.725 15:12:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:26:24.725 15:12:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.725 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:26:24.725 [2024-11-18 15:12:48.285795] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.725 15:12:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:24.725 15:12:48 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:24.726 15:12:48 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:24.726 15:12:48 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:24.726 15:12:48 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:24.726 15:12:48 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:24.726 15:12:48 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:24.726 15:12:48 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:24.726 15:12:48 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:24.726 15:12:48 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:24.726 15:12:48 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:24.726 15:12:48 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:24.726 15:12:48 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:28.012 Initializing NVMe Controllers 00:26:28.012 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:28.012 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:28.012 Initialization complete. Launching workers. 00:26:28.012 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10972, failed: 0 00:26:28.012 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1078, failed to submit 9894 00:26:28.012 success 736, unsuccess 342, failed 0 00:26:28.012 15:12:51 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:28.012 15:12:51 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:31.301 [2024-11-18 15:12:54.750377] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a05c0 is same with the state(5) to be set 00:26:31.301 Initializing NVMe Controllers 00:26:31.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:31.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:31.301 Initialization complete. Launching workers. 00:26:31.301 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5980, failed: 0 00:26:31.301 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1207, failed to submit 4773 00:26:31.301 success 297, unsuccess 910, failed 0 00:26:31.301 15:12:54 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:31.301 15:12:54 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:34.595 Initializing NVMe Controllers 00:26:34.595 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:34.595 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:34.595 Initialization complete. Launching workers. 
00:26:34.595 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 29711, failed: 0 00:26:34.595 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2647, failed to submit 27064 00:26:34.595 success 398, unsuccess 2249, failed 0 00:26:34.595 15:12:58 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:26:34.595 15:12:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.595 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:26:34.595 15:12:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.595 15:12:58 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:34.595 15:12:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.595 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:26:34.854 15:12:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.854 15:12:58 -- target/abort_qd_sizes.sh@62 -- # killprocess 103588 00:26:34.854 15:12:58 -- common/autotest_common.sh@936 -- # '[' -z 103588 ']' 00:26:34.854 15:12:58 -- common/autotest_common.sh@940 -- # kill -0 103588 00:26:34.854 15:12:58 -- common/autotest_common.sh@941 -- # uname 00:26:34.854 15:12:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:34.854 15:12:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103588 00:26:35.113 15:12:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:35.113 15:12:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:35.113 killing process with pid 103588 00:26:35.113 15:12:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 103588' 00:26:35.113 15:12:58 -- common/autotest_common.sh@955 -- # kill 103588 00:26:35.113 15:12:58 -- common/autotest_common.sh@960 -- # wait 103588 00:26:35.113 00:26:35.113 real 0m10.484s 00:26:35.113 user 0m42.929s 00:26:35.113 sys 0m1.737s 00:26:35.113 15:12:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:35.113 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:26:35.113 ************************************ 00:26:35.113 END TEST spdk_target_abort 00:26:35.113 ************************************ 00:26:35.372 15:12:58 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:26:35.372 15:12:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:35.372 15:12:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:35.372 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:26:35.372 ************************************ 00:26:35.372 START TEST kernel_target_abort 00:26:35.372 ************************************ 00:26:35.372 15:12:58 -- common/autotest_common.sh@1114 -- # kernel_target 00:26:35.372 15:12:58 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:26:35.372 15:12:58 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:26:35.372 15:12:58 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:26:35.372 15:12:58 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:26:35.372 15:12:58 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:26:35.372 15:12:58 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:35.372 15:12:58 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:35.372 15:12:58 -- nvmf/common.sh@627 -- # local block nvme 00:26:35.372 15:12:58 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:26:35.372 15:12:58 -- nvmf/common.sh@630 -- # modprobe nvmet 00:26:35.372 15:12:58 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:35.372 15:12:58 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:35.631 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:35.631 Waiting for block devices as requested 00:26:35.631 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:35.889 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:35.889 15:12:59 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:35.889 15:12:59 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:35.889 15:12:59 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:26:35.889 15:12:59 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:26:35.889 15:12:59 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:35.889 No valid GPT data, bailing 00:26:35.889 15:12:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:35.889 15:12:59 -- scripts/common.sh@393 -- # pt= 00:26:35.889 15:12:59 -- scripts/common.sh@394 -- # return 1 00:26:35.889 15:12:59 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:26:35.889 15:12:59 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:35.889 15:12:59 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:35.889 15:12:59 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:26:35.889 15:12:59 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:26:35.889 15:12:59 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:35.889 No valid GPT data, bailing 00:26:35.889 15:12:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:35.889 15:12:59 -- scripts/common.sh@393 -- # pt= 00:26:35.889 15:12:59 -- scripts/common.sh@394 -- # return 1 00:26:35.889 15:12:59 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:26:35.889 15:12:59 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:35.889 15:12:59 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:26:35.889 15:12:59 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:26:35.889 15:12:59 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:26:35.889 15:12:59 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:26:36.147 No valid GPT data, bailing 00:26:36.147 15:12:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:26:36.147 15:12:59 -- scripts/common.sh@393 -- # pt= 00:26:36.147 15:12:59 -- scripts/common.sh@394 -- # return 1 00:26:36.147 15:12:59 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:26:36.147 15:12:59 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:36.147 15:12:59 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:26:36.147 15:12:59 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:26:36.147 15:12:59 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:26:36.147 15:12:59 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:26:36.147 No valid GPT data, bailing 00:26:36.147 15:12:59 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:26:36.147 15:12:59 -- scripts/common.sh@393 -- # pt= 00:26:36.147 15:12:59 -- scripts/common.sh@394 -- # return 1 00:26:36.147 15:12:59 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:26:36.147 15:12:59 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:26:36.147 15:12:59 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:36.147 15:12:59 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:36.147 15:12:59 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:36.147 15:12:59 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:26:36.147 15:12:59 -- nvmf/common.sh@654 -- # echo 1 00:26:36.147 15:12:59 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:26:36.147 15:12:59 -- nvmf/common.sh@656 -- # echo 1 00:26:36.147 15:12:59 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:26:36.147 15:12:59 -- nvmf/common.sh@663 -- # echo tcp 00:26:36.147 15:12:59 -- nvmf/common.sh@664 -- # echo 4420 00:26:36.147 15:12:59 -- nvmf/common.sh@665 -- # echo ipv4 00:26:36.147 15:12:59 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:36.147 15:12:59 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1906e403-ffd2-4cdd-80f4-a467486e83fd --hostid=1906e403-ffd2-4cdd-80f4-a467486e83fd -a 10.0.0.1 -t tcp -s 4420 00:26:36.147 00:26:36.147 Discovery Log Number of Records 2, Generation counter 2 00:26:36.147 =====Discovery Log Entry 0====== 00:26:36.147 trtype: tcp 00:26:36.147 adrfam: ipv4 00:26:36.147 subtype: current discovery subsystem 00:26:36.147 treq: not specified, sq flow control disable supported 00:26:36.147 portid: 1 00:26:36.147 trsvcid: 4420 00:26:36.147 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:36.147 traddr: 10.0.0.1 00:26:36.147 eflags: none 00:26:36.147 sectype: none 00:26:36.147 =====Discovery Log Entry 1====== 00:26:36.147 trtype: tcp 00:26:36.147 adrfam: ipv4 00:26:36.147 subtype: nvme subsystem 00:26:36.147 treq: not specified, sq flow control disable supported 00:26:36.147 portid: 1 00:26:36.147 trsvcid: 4420 00:26:36.147 subnqn: kernel_target 00:26:36.147 traddr: 10.0.0.1 00:26:36.147 eflags: none 00:26:36.147 sectype: none 00:26:36.147 15:12:59 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:26:36.147 15:12:59 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:36.147 15:12:59 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:36.147 15:12:59 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:36.147 15:12:59 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:36.147 15:12:59 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:26:36.147 15:12:59 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:36.147 15:12:59 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:36.147 15:12:59 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:36.148 15:12:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:36.148 15:12:59 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:36.148 15:12:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:36.148 15:12:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:36.148 15:12:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:36.148 15:12:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:36.148 15:12:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:36.148 15:12:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
00:26:36.148 15:12:59 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:36.148 15:12:59 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:36.148 15:12:59 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:36.148 15:12:59 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:39.433 Initializing NVMe Controllers 00:26:39.433 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:39.433 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:39.433 Initialization complete. Launching workers. 00:26:39.433 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 35295, failed: 0 00:26:39.433 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 35295, failed to submit 0 00:26:39.433 success 0, unsuccess 35295, failed 0 00:26:39.433 15:13:02 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:39.433 15:13:02 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:42.720 Initializing NVMe Controllers 00:26:42.720 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:42.720 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:42.720 Initialization complete. Launching workers. 00:26:42.720 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 77029, failed: 0 00:26:42.720 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 32789, failed to submit 44240 00:26:42.721 success 0, unsuccess 32789, failed 0 00:26:42.721 15:13:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:42.721 15:13:06 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:46.007 Initializing NVMe Controllers 00:26:46.007 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:46.007 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:46.007 Initialization complete. Launching workers. 
00:26:46.007 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 85056, failed: 0 00:26:46.007 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 21222, failed to submit 63834 00:26:46.007 success 0, unsuccess 21222, failed 0 00:26:46.007 15:13:09 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:26:46.007 15:13:09 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:26:46.007 15:13:09 -- nvmf/common.sh@677 -- # echo 0 00:26:46.007 15:13:09 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:26:46.007 15:13:09 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:46.007 15:13:09 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:46.007 15:13:09 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:46.007 15:13:09 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:26:46.007 15:13:09 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:26:46.007 00:26:46.007 real 0m10.533s 00:26:46.007 user 0m5.340s 00:26:46.007 sys 0m2.510s 00:26:46.007 15:13:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:46.007 15:13:09 -- common/autotest_common.sh@10 -- # set +x 00:26:46.007 ************************************ 00:26:46.007 END TEST kernel_target_abort 00:26:46.007 ************************************ 00:26:46.007 15:13:09 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:26:46.007 15:13:09 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:26:46.007 15:13:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:46.007 15:13:09 -- nvmf/common.sh@116 -- # sync 00:26:46.007 15:13:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:46.007 15:13:09 -- nvmf/common.sh@119 -- # set +e 00:26:46.007 15:13:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:46.007 15:13:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:46.007 rmmod nvme_tcp 00:26:46.007 rmmod nvme_fabrics 00:26:46.007 rmmod nvme_keyring 00:26:46.007 15:13:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:46.007 15:13:09 -- nvmf/common.sh@123 -- # set -e 00:26:46.007 15:13:09 -- nvmf/common.sh@124 -- # return 0 00:26:46.007 15:13:09 -- nvmf/common.sh@477 -- # '[' -n 103588 ']' 00:26:46.007 15:13:09 -- nvmf/common.sh@478 -- # killprocess 103588 00:26:46.007 15:13:09 -- common/autotest_common.sh@936 -- # '[' -z 103588 ']' 00:26:46.007 15:13:09 -- common/autotest_common.sh@940 -- # kill -0 103588 00:26:46.007 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (103588) - No such process 00:26:46.007 Process with pid 103588 is not found 00:26:46.007 15:13:09 -- common/autotest_common.sh@963 -- # echo 'Process with pid 103588 is not found' 00:26:46.007 15:13:09 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:46.007 15:13:09 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:46.575 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:46.834 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:46.834 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:46.834 15:13:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:46.834 15:13:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:46.834 15:13:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:46.834 15:13:10 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:26:46.834 15:13:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.834 15:13:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:46.834 15:13:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.834 15:13:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:46.834 00:26:46.834 real 0m24.809s 00:26:46.834 user 0m49.822s 00:26:46.834 sys 0m5.725s 00:26:46.834 15:13:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:46.834 ************************************ 00:26:46.834 END TEST nvmf_abort_qd_sizes 00:26:46.834 ************************************ 00:26:46.834 15:13:10 -- common/autotest_common.sh@10 -- # set +x 00:26:46.834 15:13:10 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:26:46.834 15:13:10 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:26:46.834 15:13:10 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:26:46.834 15:13:10 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:26:46.834 15:13:10 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:26:46.834 15:13:10 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:26:46.834 15:13:10 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:26:46.834 15:13:10 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:26:46.834 15:13:10 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:26:46.834 15:13:10 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:46.834 15:13:10 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:46.834 15:13:10 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:26:46.834 15:13:10 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:26:46.834 15:13:10 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:26:46.834 15:13:10 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:26:46.834 15:13:10 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:26:46.834 15:13:10 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:26:46.834 15:13:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:46.834 15:13:10 -- common/autotest_common.sh@10 -- # set +x 00:26:46.834 15:13:10 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:26:46.834 15:13:10 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:26:46.834 15:13:10 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:26:46.834 15:13:10 -- common/autotest_common.sh@10 -- # set +x 00:26:48.739 INFO: APP EXITING 00:26:48.739 INFO: killing all VMs 00:26:48.739 INFO: killing vhost app 00:26:48.739 INFO: EXIT DONE 00:26:49.676 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:49.676 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:49.676 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:50.244 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:50.244 Cleaning 00:26:50.244 Removing: /var/run/dpdk/spdk0/config 00:26:50.244 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:50.244 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:50.244 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:50.244 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:50.244 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:50.244 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:50.244 Removing: /var/run/dpdk/spdk1/config 00:26:50.244 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:26:50.244 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:26:50.244 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:26:50.244 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:26:50.244 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:26:50.244 Removing: /var/run/dpdk/spdk1/hugepage_info 00:26:50.504 Removing: /var/run/dpdk/spdk2/config 00:26:50.504 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:26:50.504 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:26:50.504 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:26:50.504 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:26:50.504 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:26:50.504 Removing: /var/run/dpdk/spdk2/hugepage_info 00:26:50.504 Removing: /var/run/dpdk/spdk3/config 00:26:50.504 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:26:50.504 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:26:50.504 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:26:50.504 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:26:50.504 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:26:50.504 Removing: /var/run/dpdk/spdk3/hugepage_info 00:26:50.504 Removing: /var/run/dpdk/spdk4/config 00:26:50.504 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:26:50.504 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:26:50.504 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:26:50.504 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:26:50.504 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:26:50.504 Removing: /var/run/dpdk/spdk4/hugepage_info 00:26:50.504 Removing: /dev/shm/nvmf_trace.0 00:26:50.504 Removing: /dev/shm/spdk_tgt_trace.pid67609 00:26:50.504 Removing: /var/run/dpdk/spdk0 00:26:50.504 Removing: /var/run/dpdk/spdk1 00:26:50.504 Removing: /var/run/dpdk/spdk2 00:26:50.504 Removing: /var/run/dpdk/spdk3 00:26:50.504 Removing: /var/run/dpdk/spdk4 00:26:50.504 Removing: /var/run/dpdk/spdk_pid100567 00:26:50.504 Removing: /var/run/dpdk/spdk_pid100768 00:26:50.504 Removing: /var/run/dpdk/spdk_pid101063 00:26:50.504 Removing: /var/run/dpdk/spdk_pid101369 00:26:50.504 Removing: /var/run/dpdk/spdk_pid101920 00:26:50.504 Removing: /var/run/dpdk/spdk_pid101926 00:26:50.504 Removing: /var/run/dpdk/spdk_pid102294 00:26:50.504 Removing: /var/run/dpdk/spdk_pid102455 00:26:50.504 Removing: /var/run/dpdk/spdk_pid102612 00:26:50.504 Removing: /var/run/dpdk/spdk_pid102709 00:26:50.504 Removing: /var/run/dpdk/spdk_pid102869 00:26:50.504 Removing: /var/run/dpdk/spdk_pid102978 00:26:50.504 Removing: /var/run/dpdk/spdk_pid103657 00:26:50.504 Removing: /var/run/dpdk/spdk_pid103697 00:26:50.504 Removing: /var/run/dpdk/spdk_pid103729 00:26:50.504 Removing: /var/run/dpdk/spdk_pid103979 00:26:50.504 Removing: /var/run/dpdk/spdk_pid104013 00:26:50.504 Removing: /var/run/dpdk/spdk_pid104044 00:26:50.504 Removing: /var/run/dpdk/spdk_pid67451 00:26:50.504 Removing: /var/run/dpdk/spdk_pid67609 00:26:50.504 Removing: /var/run/dpdk/spdk_pid67925 00:26:50.504 Removing: /var/run/dpdk/spdk_pid68205 00:26:50.504 Removing: /var/run/dpdk/spdk_pid68383 00:26:50.504 Removing: /var/run/dpdk/spdk_pid68466 00:26:50.504 Removing: /var/run/dpdk/spdk_pid68565 00:26:50.504 Removing: /var/run/dpdk/spdk_pid68667 00:26:50.504 Removing: /var/run/dpdk/spdk_pid68700 00:26:50.504 Removing: /var/run/dpdk/spdk_pid68741 00:26:50.504 Removing: /var/run/dpdk/spdk_pid68804 00:26:50.504 Removing: /var/run/dpdk/spdk_pid68922 00:26:50.504 Removing: /var/run/dpdk/spdk_pid69554 00:26:50.504 Removing: /var/run/dpdk/spdk_pid69613 00:26:50.504 Removing: /var/run/dpdk/spdk_pid69682 00:26:50.504 Removing: 
/var/run/dpdk/spdk_pid69710 00:26:50.504 Removing: /var/run/dpdk/spdk_pid69789 00:26:50.504 Removing: /var/run/dpdk/spdk_pid69817 00:26:50.504 Removing: /var/run/dpdk/spdk_pid69915 00:26:50.504 Removing: /var/run/dpdk/spdk_pid69943 00:26:50.504 Removing: /var/run/dpdk/spdk_pid70000 00:26:50.504 Removing: /var/run/dpdk/spdk_pid70031 00:26:50.504 Removing: /var/run/dpdk/spdk_pid70077 00:26:50.504 Removing: /var/run/dpdk/spdk_pid70107 00:26:50.504 Removing: /var/run/dpdk/spdk_pid70266 00:26:50.504 Removing: /var/run/dpdk/spdk_pid70296 00:26:50.504 Removing: /var/run/dpdk/spdk_pid70385 00:26:50.504 Removing: /var/run/dpdk/spdk_pid70450 00:26:50.504 Removing: /var/run/dpdk/spdk_pid70475 00:26:50.763 Removing: /var/run/dpdk/spdk_pid70533 00:26:50.763 Removing: /var/run/dpdk/spdk_pid70553 00:26:50.763 Removing: /var/run/dpdk/spdk_pid70587 00:26:50.763 Removing: /var/run/dpdk/spdk_pid70601 00:26:50.763 Removing: /var/run/dpdk/spdk_pid70641 00:26:50.763 Removing: /var/run/dpdk/spdk_pid70655 00:26:50.763 Removing: /var/run/dpdk/spdk_pid70690 00:26:50.763 Removing: /var/run/dpdk/spdk_pid70709 00:26:50.763 Removing: /var/run/dpdk/spdk_pid70740 00:26:50.763 Removing: /var/run/dpdk/spdk_pid70760 00:26:50.763 Removing: /var/run/dpdk/spdk_pid70794 00:26:50.763 Removing: /var/run/dpdk/spdk_pid70814 00:26:50.763 Removing: /var/run/dpdk/spdk_pid70847 00:26:50.763 Removing: /var/run/dpdk/spdk_pid70862 00:26:50.763 Removing: /var/run/dpdk/spdk_pid70898 00:26:50.763 Removing: /var/run/dpdk/spdk_pid70917 00:26:50.763 Removing: /var/run/dpdk/spdk_pid70953 00:26:50.763 Removing: /var/run/dpdk/spdk_pid70967 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71007 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71021 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71050 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71075 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71104 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71124 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71158 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71172 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71212 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71226 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71261 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71280 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71309 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71329 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71363 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71386 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71423 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71440 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71478 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71497 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71532 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71551 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71587 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71664 00:26:50.763 Removing: /var/run/dpdk/spdk_pid71782 00:26:50.763 Removing: /var/run/dpdk/spdk_pid72215 00:26:50.763 Removing: /var/run/dpdk/spdk_pid79183 00:26:50.763 Removing: /var/run/dpdk/spdk_pid79541 00:26:50.763 Removing: /var/run/dpdk/spdk_pid81986 00:26:50.763 Removing: /var/run/dpdk/spdk_pid82368 00:26:50.763 Removing: /var/run/dpdk/spdk_pid82636 00:26:50.763 Removing: /var/run/dpdk/spdk_pid82683 00:26:50.763 Removing: /var/run/dpdk/spdk_pid82996 00:26:50.763 Removing: /var/run/dpdk/spdk_pid83047 00:26:50.763 Removing: /var/run/dpdk/spdk_pid83432 00:26:50.763 Removing: /var/run/dpdk/spdk_pid83961 00:26:50.763 Removing: /var/run/dpdk/spdk_pid84391 00:26:50.763 Removing: /var/run/dpdk/spdk_pid85330 
00:26:50.763 Removing: /var/run/dpdk/spdk_pid86324 00:26:50.763 Removing: /var/run/dpdk/spdk_pid86443 00:26:50.763 Removing: /var/run/dpdk/spdk_pid86505 00:26:50.763 Removing: /var/run/dpdk/spdk_pid87993 00:26:50.763 Removing: /var/run/dpdk/spdk_pid88235 00:26:50.763 Removing: /var/run/dpdk/spdk_pid88682 00:26:50.763 Removing: /var/run/dpdk/spdk_pid88795 00:26:50.763 Removing: /var/run/dpdk/spdk_pid88942 00:26:50.763 Removing: /var/run/dpdk/spdk_pid88988 00:26:50.763 Removing: /var/run/dpdk/spdk_pid89033 00:26:50.763 Removing: /var/run/dpdk/spdk_pid89079 00:26:50.763 Removing: /var/run/dpdk/spdk_pid89242 00:26:50.763 Removing: /var/run/dpdk/spdk_pid89389 00:26:50.763 Removing: /var/run/dpdk/spdk_pid89654 00:26:50.763 Removing: /var/run/dpdk/spdk_pid89777 00:26:50.763 Removing: /var/run/dpdk/spdk_pid90202 00:26:50.763 Removing: /var/run/dpdk/spdk_pid90582 00:26:50.763 Removing: /var/run/dpdk/spdk_pid90594 00:26:50.763 Removing: /var/run/dpdk/spdk_pid92854 00:26:50.763 Removing: /var/run/dpdk/spdk_pid93166 00:26:50.764 Removing: /var/run/dpdk/spdk_pid93681 00:26:50.764 Removing: /var/run/dpdk/spdk_pid93684 00:26:50.764 Removing: /var/run/dpdk/spdk_pid94027 00:26:51.022 Removing: /var/run/dpdk/spdk_pid94045 00:26:51.022 Removing: /var/run/dpdk/spdk_pid94060 00:26:51.022 Removing: /var/run/dpdk/spdk_pid94091 00:26:51.022 Removing: /var/run/dpdk/spdk_pid94096 00:26:51.022 Removing: /var/run/dpdk/spdk_pid94240 00:26:51.022 Removing: /var/run/dpdk/spdk_pid94249 00:26:51.022 Removing: /var/run/dpdk/spdk_pid94352 00:26:51.022 Removing: /var/run/dpdk/spdk_pid94359 00:26:51.022 Removing: /var/run/dpdk/spdk_pid94462 00:26:51.022 Removing: /var/run/dpdk/spdk_pid94468 00:26:51.022 Removing: /var/run/dpdk/spdk_pid94947 00:26:51.022 Removing: /var/run/dpdk/spdk_pid94996 00:26:51.022 Removing: /var/run/dpdk/spdk_pid95147 00:26:51.022 Removing: /var/run/dpdk/spdk_pid95274 00:26:51.022 Removing: /var/run/dpdk/spdk_pid95670 00:26:51.022 Removing: /var/run/dpdk/spdk_pid95922 00:26:51.022 Removing: /var/run/dpdk/spdk_pid96422 00:26:51.022 Removing: /var/run/dpdk/spdk_pid96987 00:26:51.022 Removing: /var/run/dpdk/spdk_pid97477 00:26:51.022 Removing: /var/run/dpdk/spdk_pid97548 00:26:51.022 Removing: /var/run/dpdk/spdk_pid97638 00:26:51.022 Removing: /var/run/dpdk/spdk_pid97724 00:26:51.022 Removing: /var/run/dpdk/spdk_pid97888 00:26:51.022 Removing: /var/run/dpdk/spdk_pid97976 00:26:51.022 Removing: /var/run/dpdk/spdk_pid98070 00:26:51.022 Removing: /var/run/dpdk/spdk_pid98155 00:26:51.022 Removing: /var/run/dpdk/spdk_pid98513 00:26:51.022 Removing: /var/run/dpdk/spdk_pid99213 00:26:51.022 Clean 00:26:51.022 killing process with pid 61847 00:26:51.022 killing process with pid 61852 00:26:51.023 15:13:14 -- common/autotest_common.sh@1446 -- # return 0 00:26:51.023 15:13:14 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:26:51.023 15:13:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:51.023 15:13:14 -- common/autotest_common.sh@10 -- # set +x 00:26:51.282 15:13:14 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:26:51.282 15:13:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:51.282 15:13:14 -- common/autotest_common.sh@10 -- # set +x 00:26:51.282 15:13:14 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:51.282 15:13:14 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:26:51.282 15:13:14 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:26:51.282 15:13:14 
-- spdk/autotest.sh@381 -- # [[ y == y ]] 00:26:51.282 15:13:14 -- spdk/autotest.sh@383 -- # hostname 00:26:51.282 15:13:14 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:26:51.540 geninfo: WARNING: invalid characters removed from testname! 00:27:13.473 15:13:34 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:14.045 15:13:37 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:16.578 15:13:39 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:18.482 15:13:41 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:20.461 15:13:43 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:22.364 15:13:45 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:24.899 15:13:47 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:24.899 15:13:47 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:27:24.899 15:13:47 -- common/autotest_common.sh@1690 -- $ lcov --version 00:27:24.899 15:13:47 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:27:24.899 15:13:48 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:27:24.899 15:13:48 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:27:24.899 15:13:48 -- scripts/common.sh@332 -- $ local ver1 ver1_l 
00:27:24.899 15:13:48 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:27:24.899 15:13:48 -- scripts/common.sh@335 -- $ IFS=.-: 00:27:24.899 15:13:48 -- scripts/common.sh@335 -- $ read -ra ver1 00:27:24.899 15:13:48 -- scripts/common.sh@336 -- $ IFS=.-: 00:27:24.899 15:13:48 -- scripts/common.sh@336 -- $ read -ra ver2 00:27:24.899 15:13:48 -- scripts/common.sh@337 -- $ local 'op=<' 00:27:24.899 15:13:48 -- scripts/common.sh@339 -- $ ver1_l=2 00:27:24.899 15:13:48 -- scripts/common.sh@340 -- $ ver2_l=1 00:27:24.899 15:13:48 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:27:24.899 15:13:48 -- scripts/common.sh@343 -- $ case "$op" in 00:27:24.899 15:13:48 -- scripts/common.sh@344 -- $ : 1 00:27:24.899 15:13:48 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:27:24.899 15:13:48 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:24.899 15:13:48 -- scripts/common.sh@364 -- $ decimal 1 00:27:24.899 15:13:48 -- scripts/common.sh@352 -- $ local d=1 00:27:24.899 15:13:48 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:27:24.899 15:13:48 -- scripts/common.sh@354 -- $ echo 1 00:27:24.899 15:13:48 -- scripts/common.sh@364 -- $ ver1[v]=1 00:27:24.899 15:13:48 -- scripts/common.sh@365 -- $ decimal 2 00:27:24.899 15:13:48 -- scripts/common.sh@352 -- $ local d=2 00:27:24.899 15:13:48 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:27:24.899 15:13:48 -- scripts/common.sh@354 -- $ echo 2 00:27:24.899 15:13:48 -- scripts/common.sh@365 -- $ ver2[v]=2 00:27:24.899 15:13:48 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:27:24.899 15:13:48 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:27:24.899 15:13:48 -- scripts/common.sh@367 -- $ return 0 00:27:24.899 15:13:48 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:24.899 15:13:48 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:27:24.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.899 --rc genhtml_branch_coverage=1 00:27:24.899 --rc genhtml_function_coverage=1 00:27:24.899 --rc genhtml_legend=1 00:27:24.899 --rc geninfo_all_blocks=1 00:27:24.899 --rc geninfo_unexecuted_blocks=1 00:27:24.899 00:27:24.899 ' 00:27:24.899 15:13:48 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:27:24.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.899 --rc genhtml_branch_coverage=1 00:27:24.899 --rc genhtml_function_coverage=1 00:27:24.899 --rc genhtml_legend=1 00:27:24.899 --rc geninfo_all_blocks=1 00:27:24.899 --rc geninfo_unexecuted_blocks=1 00:27:24.899 00:27:24.899 ' 00:27:24.899 15:13:48 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:27:24.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.899 --rc genhtml_branch_coverage=1 00:27:24.899 --rc genhtml_function_coverage=1 00:27:24.899 --rc genhtml_legend=1 00:27:24.899 --rc geninfo_all_blocks=1 00:27:24.899 --rc geninfo_unexecuted_blocks=1 00:27:24.899 00:27:24.899 ' 00:27:24.899 15:13:48 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:27:24.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.899 --rc genhtml_branch_coverage=1 00:27:24.899 --rc genhtml_function_coverage=1 00:27:24.899 --rc genhtml_legend=1 00:27:24.899 --rc geninfo_all_blocks=1 00:27:24.899 --rc geninfo_unexecuted_blocks=1 00:27:24.899 00:27:24.899 ' 00:27:24.899 15:13:48 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:24.899 15:13:48 -- scripts/common.sh@433 -- $ 
[[ -e /bin/wpdk_common.sh ]] 00:27:24.899 15:13:48 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:24.899 15:13:48 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:24.899 15:13:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.899 15:13:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.899 15:13:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.899 15:13:48 -- paths/export.sh@5 -- $ export PATH 00:27:24.899 15:13:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.899 15:13:48 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:24.899 15:13:48 -- common/autobuild_common.sh@440 -- $ date +%s 00:27:24.899 15:13:48 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731942828.XXXXXX 00:27:24.899 15:13:48 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731942828.q8YuOZ 00:27:24.899 15:13:48 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:27:24.899 15:13:48 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:27:24.899 15:13:48 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:27:24.899 15:13:48 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:27:24.899 15:13:48 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:24.899 15:13:48 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:24.899 15:13:48 -- common/autobuild_common.sh@456 -- $ get_config_params 00:27:24.899 15:13:48 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:27:24.899 15:13:48 -- common/autotest_common.sh@10 -- $ set +x 00:27:24.899 15:13:48 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan 
--enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:27:24.899 15:13:48 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:24.899 15:13:48 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:24.899 15:13:48 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:24.899 15:13:48 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:24.899 15:13:48 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:24.899 15:13:48 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:24.899 15:13:48 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:24.899 15:13:48 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:24.899 15:13:48 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:24.899 15:13:48 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:24.899 + [[ -n 5974 ]] 00:27:24.899 + sudo kill 5974 00:27:24.909 [Pipeline] } 00:27:24.924 [Pipeline] // timeout 00:27:24.928 [Pipeline] } 00:27:24.942 [Pipeline] // stage 00:27:24.946 [Pipeline] } 00:27:24.963 [Pipeline] // catchError 00:27:24.973 [Pipeline] stage 00:27:24.975 [Pipeline] { (Stop VM) 00:27:24.989 [Pipeline] sh 00:27:25.271 + vagrant halt 00:27:28.556 ==> default: Halting domain... 00:27:35.136 [Pipeline] sh 00:27:35.415 + vagrant destroy -f 00:27:38.000 ==> default: Removing domain... 00:27:38.271 [Pipeline] sh 00:27:38.553 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:27:38.562 [Pipeline] } 00:27:38.578 [Pipeline] // stage 00:27:38.585 [Pipeline] } 00:27:38.599 [Pipeline] // dir 00:27:38.605 [Pipeline] } 00:27:38.621 [Pipeline] // wrap 00:27:38.627 [Pipeline] } 00:27:38.641 [Pipeline] // catchError 00:27:38.651 [Pipeline] stage 00:27:38.654 [Pipeline] { (Epilogue) 00:27:38.668 [Pipeline] sh 00:27:38.954 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:43.158 [Pipeline] catchError 00:27:43.160 [Pipeline] { 00:27:43.173 [Pipeline] sh 00:27:43.454 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:43.712 Artifacts sizes are good 00:27:43.722 [Pipeline] } 00:27:43.740 [Pipeline] // catchError 00:27:43.754 [Pipeline] archiveArtifacts 00:27:43.762 Archiving artifacts 00:27:43.905 [Pipeline] cleanWs 00:27:43.922 [WS-CLEANUP] Deleting project workspace... 00:27:43.922 [WS-CLEANUP] Deferred wipeout is used... 00:27:43.944 [WS-CLEANUP] done 00:27:43.946 [Pipeline] } 00:27:43.961 [Pipeline] // stage 00:27:43.967 [Pipeline] } 00:27:43.983 [Pipeline] // node 00:27:43.988 [Pipeline] End of Pipeline 00:27:44.027 Finished: SUCCESS
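[Editor's note] The kernel_target_abort test traced above (around 00:26:35–00:26:46) builds a Linux kernel NVMe-oF/TCP target through configfs via nvmf/common.sh configure_kernel_target. Because bash xtrace does not print redirections, the log shows the mkdir/echo/ln commands but not the configfs files the echoes write into. The sketch below is an illustrative reconstruction of that setup, not the test script itself: the attribute filenames after each '>' are inferred from the stock nvmet configfs layout and are assumptions, as is the destination of the 'echo SPDK-kernel_target' line.

    # Sketch only: reconstructs the configfs steps visible in the trace above.
    # Redirect targets are assumed from the standard nvmet layout, not taken from the log.
    sub=/sys/kernel/config/nvmet/subsystems/kernel_target
    port=/sys/kernel/config/nvmet/ports/1
    modprobe nvmet                                   # as in nvmf/common.sh@630
    mkdir "$sub"                                     # @645
    mkdir "$sub/namespaces/1"                        # @646
    mkdir "$port"                                    # @647
    echo SPDK-kernel_target > "$sub/attr_serial"     # @652; destination uncertain (attr_serial vs attr_model)
    echo 1            > "$sub/attr_allow_any_host"   # @654 (assumed target)
    echo /dev/nvme1n3 > "$sub/namespaces/1/device_path"  # @655; device chosen by the GPT scan above
    echo 1            > "$sub/namespaces/1/enable"   # @656
    echo 10.0.0.1     > "$port/addr_traddr"          # @662
    echo tcp          > "$port/addr_trtype"          # @663
    echo 4420         > "$port/addr_trsvcid"         # @664
    echo ipv4         > "$port/addr_adrfam"          # @665
    ln -s "$sub" "$port/subsystems/"                 # @668; exposes kernel_target on 10.0.0.1:4420

Teardown mirrors the clean_kernel_target trace near 00:26:46: write 0 to the namespace enable file, remove the port-to-subsystem symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.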